• The dro module in the RSOME package is a modeling environment for solving distributionally robust optimization problems. Its distinguishing feature is a collection of convenient tools for distributionally robust modeling, used to define adaptive decision variables, ambiguity sets, and worst-case expectations over those ambiguity sets. RSOME: Robust Stochastic Optimization Made Easy

The General Formulation for Distributionally Robust Optimization Models

The RSOME package supports optimization models that fit the general formulation below

$$
\begin{aligned}
\min ~~ &\sup\limits_{\mathbb{P}\in\mathcal{F}_0} \mathbb{E}_{\mathbb{P}} \left\{\pmb{a}_0^{\top}(\tilde{s}, \tilde{\pmb{z}})\pmb{x}(\tilde{s}) + \pmb{b}_0^{\top}\pmb{y}(\tilde{s}, \tilde{\pmb{z}}) + c_0(\tilde{s}, \tilde{\pmb{z}})\right\} && \\
\text{s.t.} ~~ &\max\limits_{s\in[S], \pmb{z}\in\mathcal{Z}_{ms}}\left\{\pmb{a}_m^{\top}(s, \pmb{z})\pmb{x}(s) + \pmb{b}_m^{\top}\pmb{y}(s, \pmb{z}) + c_m(s, \pmb{z})\right\} \leq 0, && \forall m\in \mathcal{M}_1 \\
&\sup\limits_{\mathbb{P}\in\mathcal{F}_m}\mathbb{E}_{\mathbb{P}}\left\{\pmb{a}_m^{\top}(\tilde{s}, \tilde{\pmb{z}})\pmb{x}(\tilde{s}) + \pmb{b}_m^{\top}\pmb{y}(\tilde{s}, \tilde{\pmb{z}}) + c_m(\tilde{s}, \tilde{\pmb{z}})\right\} \leq 0, && \forall m\in \mathcal{M}_2 \\
&x_i \in \mathcal{A}\left(\mathcal{C}_x^i\right) && \forall i \in [I_x] \\
&y_n \in \overline{\mathcal{A}}\left(\mathcal{C}_y^n, \mathcal{J}_y^n\right) && \forall n \in [I_y] \\
&\pmb{x}(s) \in \mathcal{X}_s && \forall s \in [S].
\end{aligned}
$$

Here, parameters of proper dimensions,

$$
\begin{aligned}
&\pmb{a}_m(s,\pmb{z}) := \pmb{a}_{ms}^0 + \sum\limits_{j\in[J]}\pmb{a}_{ms}^jz_j \\
&\pmb{b}_m(s) := \pmb{b}_{ms} \\
&c_m(s,\pmb{z}) := c_{ms}^0 + \sum\limits_{j\in[J]}c_{ms}^jz_j
\end{aligned}
$$

are defined similarly as in the case of RSOME for Robust Optimization, and $\mathcal{X}_s$ is a second-order cone (SOC) representable feasible set of the decision $\pmb{x}(s)$ in scenario $s$. Constraints indexed by $m\in\mathcal{M}_1$ are satisfied under the worst-case realization, just like those introduced in RSOME for robust optimization. Another set of constraints, indexed by $m\in\mathcal{M}_2$, are satisfied with regard to the worst-case expectation over all possible distributions defined by an ambiguity set $\mathcal{F}_m$ in the general form introduced in Chen et al. (2020):

$$
\begin{aligned}
\mathcal{F}_m = \left\{ \mathbb{P}\in\mathcal{P}_0\left(\mathbb{R}^{J}\times[S]\right) \left| \begin{array}{ll} \left(\tilde{\pmb{z}}, \tilde{s}\right) \sim \mathbb{P} & \\ \mathbb{E}_{\mathbb{P}}[\tilde{\pmb{z}}|\tilde{s}\in\mathcal{E}_{km}] \in \mathcal{Q}_{km} & \forall k \in [K] \\ \mathbb{P}[\tilde{\pmb{z}}\in \mathcal{Z}_{sm}| \tilde{s}=s]=1 & \forall s \in [S] \\ \mathbb{P}[\tilde{s}=s] = p_s & \forall s \in [S] \\ \text{for some } \pmb{p} \in \mathcal{P}_m & \end{array} \right. \right\}, ~~~ \forall m \in \mathcal{M}_2.
\end{aligned}
$$

Here, for each constraint indexed by $m\in\mathcal{M}_2$:

1. The conditional expectation of $\tilde{\pmb{z}}$ over each event $\mathcal{E}_{km}$ (defined as a subset of scenarios) is known to reside in an SOC representable set $\mathcal{Q}_{km}$;
2. The support of $\tilde{\pmb{z}}$ in each scenario $s\in[S]$ is specified to be another SOC representable set $\mathcal{Z}_{sm}$;
3. Probabilities of scenarios, collectively denoted by a vector $\pmb{p}$, are constrained by a third SOC representable subset $\mathcal{P}_m\subseteq\left\{\pmb{p}\in \mathbb{R}_{++}^S:~ \pmb{e}^{\top}\pmb{p}=1 \right\}$ of the probability simplex.

The dynamics of decision-making are captured by the event-wise recourse adaptation for wait-and-see decisions of two types: the event-wise static adaptation, denoted by $\mathcal{A}(\mathcal{C})$, and the event-wise affine adaptation, denoted by $\overline{\mathcal{A}}(\mathcal{C}, \mathcal{J})$. In particular, given a fixed number $S$ of scenarios and a partition $\mathcal{C}$ of these scenarios (i.e., a collection of mutually exclusive and collectively exhaustive events), the two adaptations are formally defined as follows:

$$
\begin{aligned}
&\mathcal{A}\left(\mathcal{C}\right) = \left\{ x: [S] \mapsto \mathbb{R} \left| \begin{array}{l} x(s)=x^{\mathcal{E}}, ~\mathcal{E}=\mathcal{H}_{\mathcal{C}}(s) \\ \text{for some } x^{\mathcal{E}} \in \mathbb{R} \end{array} \right. \right\}, \\
&\overline{\mathcal{A}}\left(\mathcal{C}, \mathcal{J}\right) = \left\{ y: [S]\times\mathbb{R}^{J} \mapsto \mathbb{R} \left| \begin{array}{l} y(s, \pmb{z}) = y^0(s) + \sum\limits_{j\in\mathcal{J}}y^j(s)z_j \\ \text{for some } y^0(s), y^j(s) \in \mathcal{A}\left(\mathcal{C}\right), ~j\in\mathcal{J} \end{array} \right. \right\}.
\end{aligned}
$$

Here, $\mathcal{H}_{\mathcal{C}}: [S] \mapsto \mathcal{C}$ is the function that maps each scenario $s$ to the unique event $\mathcal{E}$ in $\mathcal{C}$ that contains $s$, and $\mathcal{J} \subseteq [J]$ is an index subset of the random components $\tilde{z}_1, \dots, \tilde{z}_J$ that the affine adaptation depends on. In the remaining part of the guide, we introduce the RSOME code for specifying the event-wise ambiguity set and recourse adaptation rules.
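For intuition, the mapping $\mathcal{H}_{\mathcal{C}}$ and the event-wise static adaptation are easy to prototype in plain Python. The sketch below uses a hypothetical partition of five scenarios; it illustrates the definition and is not RSOME code:

```python
# A hypothetical partition C of the scenario set [S] = {0, 1, 2, 3, 4}:
# a collection of mutually exclusive and collectively exhaustive events.
partition = [frozenset({0}), frozenset({2, 3}), frozenset({1, 4})]

def H(C, s):
    """Map scenario s to the unique event in the partition C that contains s."""
    for event in C:
        if s in event:
            return event
    raise ValueError(f"scenario {s} is not covered by the partition")

# Event-wise static adaptation: one decision value per event, so x(s) = x^E
# with E = H_C(s).
values = {frozenset({0}): 1.0, frozenset({2, 3}): 2.5, frozenset({1, 4}): 0.0}

def x(s):
    return values[H(partition, s)]

# Scenarios 2 and 3 belong to the same event and share one decision value.
assert x(2) == x(3) == 2.5
```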

Introduction to the rsome.dro Environment

In general, the rsome.dro modeling environment is very similar to rsome.ro, discussed in the section Introduction to the rsome.ro Environment, so almost all array operations and indexing and slicing syntax can be applied to dro models. The unique features of the dro models mainly come from the scenario representation of uncertainties and a different way of specifying the event-wise adaptation of decision variables.

Models

Similar to the rsome.ro modeling environment, the dro models are all defined upon Model type objects, which are created by the constructor Model() imported from the sub-package rsome.dro.

class Model(builtins.object)
|  Model(scens=1, name=None)
|
|  Returns a model object with the given number of scenarios.
|
|  Parameters
|  ----------
|  scens : int or array-like objects
|      The number of scenarios, if it is an integer. It could also be
|      an array of scenario indices.
|  name : str
|      Name of the model
|
|  Returns
|  -------
|  model : rsome.dro.Model
|      A model object

It can be seen that the dro models are different from the ro models in the first argument scens, which specifies the scenario-representation of uncertainties. The scens argument can be specified as an integer, indicating the number of scenarios, or an array of scenario indices, as shown by the following sample code.

from rsome import dro

model1 = dro.Model(5)       # a DRO model with 5 scenarios

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model2 = dro.Model(labels)  # a DRO model with 5 scenarios given by a list.

Decision Variables

In the general formulation, decision variables x ( s ) \pmb{x}(s) are event-wise static and y ( s , z ) \pmb{y}(s, \pmb{z}) are event-wise affinely adaptive. The dro modeling environment does not differentiate the two in creating these variables. Both types of decision variables are created by the dvar() method of the model object, with the same syntax as the rsome.ro models.

dvar(shape=(1,), vtype='C', name=None) method of rsome.dro.Model instance
    Returns an array of decision variables with the given shape
    and variable type.

    Parameters
    ----------
    shape : int or tuple
        Shape of the variable array.
    vtype : {'C', 'B', 'I'}
        Type of the decision variables. 'C' means continuous; 'B'
        means binary, and 'I' means integer.
    name : str
        Name of the variable array

    Returns
    -------
    new_var : rsome.lp.DecVar
        An array of new decision variables

All decision variables are created to be non-adaptive at first. The event-wise and affine adaptations can then be specified by the adapt() method of the variable object. Details of specifying the adaptive decisions are provided in the section Event-wise Recourse Adaptations.

Random Variables

The syntax of creating random variables for a dro model is exactly the same as for ro models. You may refer to the chapter RSOME for Robust Optimization for more details.

The dro model supports the expectation of random variables, so that we can define the expectation sets $\mathcal{Q}_{km}$ in the ambiguity set $\mathcal{F}_m$. This is different from ro models. The expectation is indicated by the E() function imported from rsome, as demonstrated by the sample code below.

from rsome import dro
from rsome import E     # import E as the expectation operator

model = dro.Model()
z = model.rvar((3, 5))  # 3x5 random variables as a 2D array

E(z) <= 1               # E(z) is less than or equal to one
E(z) >= -1              # E(z) is larger than or equal to -1
E(z).sum() == 0         # sum of E(z) is zero

Event-wise Ambiguity Set

Create an Ambiguity Set

Ambiguity sets $\mathcal{F}_m$ of a dro model can be created by the ambiguity() method. The associated scenario indices of the ambiguity set can be accessed via the s attribute of the ambiguity set object, as shown by the following sample code.

from rsome import dro

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)     # create a model with 5 scenarios

fset = model.ambiguity()      # create an ambiguity set of the model
print(fset)                   # print scenarios of the model (ambiguity set)

Scenario indices:
sunny     0
cloudy    1
rainy     2
windy     3
snowy     4
dtype: int64

In the example above, strings in the list labels become the labels of the scenario indices. If an integer instead of an array is used to specify the scens argument of the Model() constructor, then the labels are simply the integer positions of the scenarios. Similar to the pandas.Series data structure, labels of the scenario indices can be any hashable data type.

RSOME supports indexing and slicing of the scenarios via either the labels or the integer-positions, as shown by the following code segments.

print(fset[2])          # the third scenario

Scenario index:
2

print(fset['rainy'])    # the third scenario

Scenario index:
2

print(fset[:3])         # the first three scenarios

Scenario indices:
sunny     0
cloudy    1
rainy     2
dtype: int64

print(fset[:'rainy'])   # the first three scenarios

Scenario indices:
sunny     0
cloudy    1
rainy     2
dtype: int64

RSOME is also consistent with the pandas.Series data type in using the iloc and loc indexers for accessing indices by integer position or by label.

print(fset.iloc[:2])            # the first two scenarios via the iloc[] indexer

Scenario indices:
sunny     0
cloudy    1
dtype: int64

print(fset.loc[:'cloudy'])      # the first two scenarios via the loc[] indexer

Scenario indices:
sunny     0
cloudy    1
dtype: int64

The indices of the scenarios are crucial in defining components of the ambiguity set, such as the sets $\mathcal{Q}_{km}$ and $\mathcal{Z}_{sm}$, which will be discussed next.

$\mathcal{Q}_{km}$ as the Support of Conditional Expectations

According to the formulation of the ambiguity set $\mathcal{F}_m$ presented in the section The General Formulation for Distributionally Robust Optimization Models, the SOC representable set $\mathcal{Q}_{km}$ is defined as the support of the conditional expectation of random variables under the event $\mathcal{E}_k$, which is a collection of selected scenarios. In the RSOME package, such a collection of scenarios can be specified by indexing or slicing the ambiguity set object, and the constraints of $\mathcal{Q}_{km}$ are defined by the exptset() method of the ambiguity set object. Take the supports of conditional expectations below, for example,

$$
\begin{aligned}
&\mathbb{E}\left[\tilde{\pmb{z}}|\tilde{s}\in\mathcal{E}_{1}\right] \in \left\{\pmb{z}\in \mathbb{R}^3 \left| \begin{array}{l} |\pmb{z}| \leq 1 \\ \|\pmb{z}\|_1 \leq 1.5 \end{array} \right. \right\}, && \mathcal{E}_1 = \{\text{sunny}, \text{cloudy}, \text{rainy}, \text{windy}, \text{snowy}\} \\
&\mathbb{E}\left[\tilde{\pmb{z}}|\tilde{s}\in\mathcal{E}_{2}\right] \in \left\{\pmb{z}\in \mathbb{R}^3 \left| \begin{array}{l} \pmb{z} = \pmb{0} \end{array} \right. \right\}, && \mathcal{E}_2 = \{\text{sunny}, \text{rainy}, \text{snowy}\},
\end{aligned}
$$

where the first event $\mathcal{E}_1$ is the collection of all scenarios, and the second event $\mathcal{E}_2$ includes the scenarios "sunny", "rainy", and "snowy". The conditional expectation information of the ambiguity set can be specified by the code below.

from rsome import dro
from rsome import E
from rsome import norm

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)

fset = model.ambiguity()             # create an ambiguity set
fset.exptset(abs(E(z)) <= 1,
             norm(E(z), 1) <= 1.5)   # the 1st support of conditional expectations
fset.loc[::2].exptset(E(z) == 0)     # the 2nd support of conditional expectations

The ambiguity set fset itself represents the event $\mathcal{E}_1$ of all scenarios, and the loc indexer is used to form the event $\mathcal{E}_2$ with three scenarios included. Besides loc, the other indexing and slicing expressions described in the previous section can also be used to construct the events for the support sets of the expectations.

$\mathcal{Z}_{sm}$ as the Support of Random Variables

The support $\mathcal{Z}_{sm}$ of random variables can be specified by the suppset() method, and the scenario information of the support can also be specified by the indexing and slicing expressions of the ambiguity set object. Take the following supports of the random variables $\tilde{\pmb{z}}\in\mathbb{R}^3$, for example,

$$
\begin{aligned}
&\mathbb{P}\left[\left.\tilde{\pmb{z}}\in \left\{ \pmb{z}\in \mathbb{R}^3 \left| \begin{array}{l} |\pmb{z}| \leq 1 \\ \|\pmb{z}\|_1 \leq 1.5 \end{array} \right. \right\} ~\right| \tilde{s}=s \right]=1, &\forall s \in \{\text{sunny}, \text{rainy}, \text{snowy}\} \\
&\mathbb{P}\left[\left.\tilde{\pmb{z}}\in \left\{ \pmb{z}\in \mathbb{R}^3 \left| \begin{array}{l} \sum\limits_{j=1}^3z_j = 0 \end{array} \right. \right\} ~\right| \tilde{s}=s \right]=1, &\forall s \in \{\text{cloudy}, \text{windy}\},
\end{aligned}
$$

the RSOME code for specifying the supports can be written as follows.

from rsome import dro
from rsome import E
from rsome import norm

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)

fset = model.ambiguity()
fset.iloc[::2].suppset(abs(z) <= 1,
                       norm(z, 1) <= 1.5)  # the support of z in scenarios 0, 2, 4
fset.iloc[1::2].suppset(z.sum() == 0)      # the support of z in scenarios 1, 3

Note that a valid ambiguity set must have support sets specified for all scenarios. An error message will be given in solving the model if any of the supports are unspecified. RSOME provides a method called showevents() that displays the specified supports for random variables and their expectations in a data frame, to help users check their ambiguity set.

from rsome import dro
from rsome import E
from rsome import norm

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)

fset = model.ambiguity()
fset.iloc[::2].suppset(abs(z) <= 1, norm(z, 1) <= 1.5)
fset.iloc[1::2].suppset(z.sum() == 0)
fset.exptset(abs(E(z)) <= 1, norm(E(z), 1) <= 1.5)
fset.loc[::2].exptset(E(z) == 0)

fset.showevents()            # display how the ambiguity set is specified

        support  expectation 0  expectation 1
sunny   defined  True           True
cloudy  defined  True           False
rainy   defined  True           True
windy   defined  True           False
snowy   defined  True           True

As the example above shows, supports for all scenarios have been defined, and there are two supports defined for the conditional expectations.

$\mathcal{P}_m$ as the Support of Scenario Probabilities

In the event-wise ambiguity set, the support of scenario probabilities can also be specified, by calling the probset() method, as demonstrated by the following sample code.

from rsome import dro
from rsome import norm

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)

fset = model.ambiguity()
p = model.p                         # p is the array of scenario probabilities
fset.probset(norm(p-0.2) <= 0.05)   # define the support of the array p

The scenario probabilities are formatted as a one-dimensional array, which can be accessed via the attribute p of the model object. Notice that the two underlying constraints on probabilities, $\pmb{p}\geq \pmb{0}$ and $\pmb{e}^{\top}\pmb{p}=1$, are already integrated in the ambiguity set, so there is no need to specify them in defining the support of scenario probabilities.
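As a quick numerical sanity check (plain numpy, not RSOME code), the nominal probability vector $p_s = 1/5$ satisfies both the built-in simplex conditions and the norm-ball restriction used above:

```python
import numpy as np

p = np.full(5, 0.2)                     # nominal scenario probabilities

assert np.all(p > 0)                    # p >= 0, built into the ambiguity set
assert abs(p.sum() - 1) < 1e-12         # e'p = 1, also built in
assert np.linalg.norm(p - 0.2) <= 0.05  # the user-specified support probset(...)
```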

Event-wise Recourse Adaptations

Here we introduce how the event-wise static adaptation $\mathcal{A}(\mathcal{C})$ and the event-wise affine adaptation $\overline{\mathcal{A}}(\mathcal{C}, \mathcal{J})$ are specified in RSOME. Note that decision variables created in the rsome.dro modeling environment are initially non-adaptive, in the sense that the event set is $\mathcal{C} = \{[S]\}$ and the dependent set is $\mathcal{J}=\varnothing$. These two sets can be modified by the adapt() method of the decision variable object, as demonstrated by the following sample code.

from rsome import dro

labels = ['sunny', 'cloudy', 'rainy', 'windy', 'snowy']
model = dro.Model(labels)
z = model.rvar(3)
x = model.dvar(3)

x.adapt(0)              # x adapts to the event of sunny
x.adapt([2, 3])         # x adapts to the event of rainy and windy
x[:2].adapt(z[:2])      # x[:2] affinely adapts to z[:2]

In the code segment above, the scenario partition $\mathcal{C}$ is specified to have three events: $\{\text{sunny}\}$, $\{\text{rainy}, \text{windy}\}$, and, collectively, the remaining scenarios $\{\text{cloudy}, \text{snowy}\}$. The decision variable x[:2] is set to be affinely adaptive to the random variables z[:2].

Similar to the linear decision rule defined in the ro module, the coefficients of an event-wise recourse adaptation can be accessed by the get() method after solving the model, where

• y.get() returns the constant coefficients of the recourse adaptation. In cases of multiple scenarios, the returned object is a pandas.Series whose length equals the number of scenarios. Each element of the series is an array with the same shape as y.
• y.get(z) returns the linear coefficients of the recourse adaptation. In cases of multiple scenarios, the returned object is a pandas.Series whose length equals the number of scenarios. Each element of the series is an array whose shape is y.shape + z.shape, i.e., the combination of the dimensions of y and z.
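To see what these coefficient objects encode, the recourse value $y(s, \pmb{z}) = y^0(s) + \sum_{j\in\mathcal{J}}y^j(s)z_j$ can be reassembled by hand. The numpy sketch below uses made-up coefficient arrays shaped like the outputs of y.get() and y.get(z); it does not call RSOME itself:

```python
import numpy as np

S, J = 3, 2                         # hypothetical: 3 scenarios, 2 random components
shape = (2,)                        # shape of the decision array y

y0 = np.zeros((S,) + shape)         # constant coefficients, one array per scenario
yz = np.zeros((S,) + shape + (J,))  # linear coefficients, shape y.shape + z.shape

y0[1] = [1.0, -0.5]                 # constant terms in scenario 1
yz[1] = [[2.0, 0.0],                # y[0] has slope 2 in z[0]
         [0.0, 3.0]]                # y[1] has slope 3 in z[1]

def recourse(s, z):
    """Evaluate y(s, z) = y0(s) + sum_j yz(s)[..., j] * z[j]."""
    return y0[s] + yz[s] @ z

z = np.array([0.5, 1.0])
print(recourse(1, z))               # -> [2.  2.5]
```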

The Worst-Case Expectations

Once the ambiguity sets $\mathcal{F}_m$ and the recourse adaptation of decision variables are defined, the worst-case expectations in the objective function or constraints can then be specified. The objective function can be specified by the minsup() method, for minimizing, or the maxinf() method, for maximizing, an objective involving worst-case expectations. The documentation of the minsup() method is given below.

minsup(obj, ambset) method of rsome.dro.Model instance
    Minimize the worst-case expected objective value over the given
    ambiguity set.

    Parameters
    ----------
    obj
        Objective function involving random variables
    ambset : Ambiguity
        The ambiguity set defined for the worst-case expectation

    Notes
    -----
    The ambiguity set defined for the objective function is considered
    the default ambiguity set for the distributionally robust model.

Similar to ro models, the ambiguity sets of constraints indexed by $m\in\mathcal{M}_2$, concerning the worst-case expectations, can be specified by the forall() method; if the ambiguity set of such a constraint is unspecified, then by default the underlying ambiguity set is the same as the set $\mathcal{F}_0$ defined for the objective function. As for the worst-case constraints indexed by $m\in\mathcal{M}_1$, the uncertainty set can also be specified by the forall() method; if the uncertainty set is unspecified, such worst-case constraints are defined upon the support sets $\mathcal{Z}_{0s}$ of the default ambiguity set $\mathcal{F}_0$, for each scenario $s\in[S]$. The sample code below demonstrates how worst-case expectations can be formulated in the objective function and constraints.

from rsome import dro
from rsome import norm
from rsome import E

model = dro.Model()             # create a model with one scenario
z = model.rvar(3)
u = model.rvar(3)
x = model.dvar(3)

fset = model.ambiguity()        # create an ambiguity set
fset.suppset(abs(z) <= u, u <= 1,
             norm(z, 1) <= 1.5)
fset.exptset(E(z) == 0, E(u) == 0.5)

model.maxinf(E(x @ z), fset)    # maximize the worst-case expectation over fset

model.st(E(x - z) <= 0)         # worst-case expectation over fset
model.st(2*x >= z)              # worst-case over the support of fset

Reference

Bertsimas, Dimitris, Melvyn Sim, and Meilin Zhang. 2019. Adaptive distributionally robust optimization. Management Science 65(2) 604-618.

Chen, Zhi, Melvyn Sim, Peng Xiong. 2020. Robust stochastic optimization made easy with RSOME. Management Science 66(8) 3329–3339.

Wiesemann, Wolfram, Daniel Kuhn, Melvyn Sim. 2014. Distributionally robust convex optimization. Operations Research 62(6) 1358–1376.

• In this case study, we use RSOME to model and solve a two-stage lot-sizing problem. Here we demonstrate how to implement the corresponding distributionally robust optimization models by defining ambiguity sets of different forms. RSOME: Robust Stochastic Optimization Made Easy

In this section, we use RSOME to replicate numerical case studies presented in Bertsimas et al. (2021). A capacitated network with $n$ locations is considered, where each location $i$ has an unknown demand $d_i$, and the demand can be satisfied by the existing local stock $x_i$ or by transporting an amount $y_{ij}$ of units from another location $j$, which is determined after the demand is realized. Such a lot-sizing problem can be written as the following two-stage formulation,

$$
\begin{aligned}
\min~& \sum\limits_{i=1}^n a_ix_i + \mathbb{E}\left[Q(\pmb{x}, \tilde{\pmb{d}})\right] \\
\text{s.t.}~ &0 \leq x_i \leq K, & i = 1, 2, ..., n
\end{aligned}
$$

where the recourse problem $Q(\pmb{x}, \pmb{d})$ is written as

$$
\begin{aligned}
Q(\pmb{x}, \pmb{d}) = \min~& \sum\limits_{i=1}^n\sum\limits_{j=1}^nc_{ij}y_{ij} &\\
\text{s.t.}~&x_i - \sum\limits_{j=1}^ny_{ij} + \sum\limits_{j=1}^ny_{ji} \geq d_i & i = 1, 2, ..., n \\
&\pmb{0} \leq \pmb{y} \leq \pmb{b}.
\end{aligned}
$$
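For a concrete feel of the recourse structure, the toy numpy check below (made-up numbers with two locations, not from the case study) evaluates the flow-balance constraint and the shipping cost of a candidate transportation plan:

```python
import numpy as np

x = np.array([3.0, 0.0])            # local stocks
d = np.array([1.0, 2.0])            # realized demands
cost = np.array([[0.0, 1.5],        # cost[i, j]: unit shipping cost from i to j
                 [1.5, 0.0]])
y = np.array([[0.0, 2.0],           # y[i, j]: units shipped from i to j
              [0.0, 0.0]])

# Flow balance at each location i: stock - outflow + inflow >= demand
available = x - y.sum(axis=1) + y.sum(axis=0)
assert np.all(available >= d)       # the candidate plan meets all demands

shipping_cost = (cost * y).sum()
print(shipping_cost)                # -> 3.0
```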

Parameters of the case studies are given below:

• The number of locations: $n=10$;
• Cost of local stock: $a_i=1$, $i=1, 2, ..., n$;
• Cost of shipping units: $c_{ij}$ is the Euclidean distance between locations $i$ and $j$;
• Stock capacity: $K=20$;
• Transportation capacity: $b_{ij}=K/\left((n-1)u_{ij}\right)$, where each $u_{ij}$ is a random variable generated from a standard uniform distribution;
• Sample size of the training set: $N=20$;
• Robustness parameter: $\epsilon=10N^{-1/10}$.

import numpy.random as rd

n = 10                          # the number of locations
a = 1                           # cost of local stock
K = 20                          # stock capacity
N = 20                          # sample size of the training set
epsilon = 10 * N**(-1/10)       # robustness parameter

b = K / (n-1) / rd.rand(n, n)   # transportation capacities

xy = 10*rd.rand(2, n)           # coordinates of the n locations
c = ((xy[[0]] - xy[[0]].T) ** 2
     + (xy[[1]] - xy[[1]].T) ** 2) ** 0.5   # pairwise Euclidean distances
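The case study below also relies on a training dataset ds of demand samples, which is not listed in the text. A minimal stand-in, assuming demands uniformly distributed on [0, K] (the exact distribution used in the case study may differ), is:

```python
import numpy as np

n, N, K = 10, 20, 20
rng = np.random.default_rng(0)

# N demand samples for the n locations, assumed uniform on [0, K]
ds = rng.uniform(0, K, size=(N, n))

print(ds.shape)     # -> (20, 10)
```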

The training dataset ds is generated assuming the demand at each location follows a uniform distribution. The average sample demand at each location is shown by the following figure. The lot-sizing problem is then solved by the approaches described in Bertsimas et al. (2021): sample average approximation (SAA), single-policy affine approximation (SP affine), multi-policy affine approximation (MP affine), and the Wasserstein scenario-wise adaptation (Wass SW).

SAA

A direct means of implementing the SAA model is to define the recourse decision y \pmb{y} as a three-dimensional array, where the first dimension represents the sample size, and the remaining two dimensions indicate the from and to locations.

from rsome import dro
import rsome.grb_solver as grb
import rsome as rso

model = dro.Model()             # define a model
x = model.dvar(n)               # here-and-now location decisions
y = model.dvar((N, n, n))       # wait-and-see transportation decisions

# Define model objective and constraints
model.min((a*x).sum() + (1/N)*(c*y.sum(axis=0)).sum())
model.st(ds <= y.sum(axis=1) - y.sum(axis=2) + x)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 0.0119s

Alternatively, the SAA method can be implemented by defining a scenario-wise ambiguity set $\mathbb{F}$ with a singleton support for each scenario,

$$
\begin{aligned}
\mathbb{F} = \left\{ \mathbb{P} \in \mathcal{P}(\mathbb{R}^n\times [N])\left| \begin{array}{ll} (\tilde{\pmb{d}}, \tilde{s}) \sim \mathbb{P} & \\ \mathbb{P}[\tilde{\pmb{d}} = \hat{\pmb{d}}_s|\tilde{s}=s]=1, &\forall s\in [N] \\ \mathbb{P}[\tilde{s}=s] = p_s, &\forall s \in [N] \\ \text{for some }\pmb{p} \in \mathcal{P} & \end{array} \right. \right\}
\end{aligned}
$$

with $\hat{\pmb{d}}_s$ being each sample record of demands. The model with such an ambiguity set can be implemented by the code below.

model = dro.Model(N)    # define a model with N scenarios
x = model.dvar(n)       # here-and-now location decisions
y = model.dvar((n, n))  # wait-and-see transportation decisions
d = model.rvar(n)       # define random variables as the demand

for s in range(N):
    y.adapt(s)          # the decision y adapts to each scenario s

# define the ambiguity set
fset = model.ambiguity()
for s in range(N):
    fset[s].suppset(d == ds[s])
pr = model.p
fset.probset(pr == 1/N)

# define model objective and constraints
model.minsup((a*x).sum() + E((c*y).sum()), fset)
model.st(d <= y.sum(axis=0) - y.sum(axis=1) + x)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 0.0193s

The optimal stock decision provided by the SAA model is shown by the following figure.

SP affine

The second data-driven approach is referred to as sample robust optimization, where an uncertainty set around each data sample is considered. Here the recourse decisions are approximated by a single-policy linear decision rule, thus the name SP affine. The ambiguity set of the sample robust model is presented below.

$$
\begin{aligned}
\mathbb{F} = \left\{ \mathbb{P} \in \mathcal{P}(\mathbb{R}^n\times [N])\left| \begin{array}{ll} (\tilde{\pmb{d}}, \tilde{s}) \sim \mathbb{P} & \\ \mathbb{P}[\|\tilde{\pmb{d}} - \hat{\pmb{d}}_s\|\leq \epsilon|\tilde{s}=s]=1, &\forall s\in [N] \\ \mathbb{P}[\tilde{s}=s] = p_s, &\forall s \in [N] \\ \text{for some }\pmb{p} \in \mathcal{P} & \end{array} \right. \right\}.
\end{aligned}
$$

The code for implementing such an ambiguity set is given below.

model = dro.Model(N)    # define a model with N scenarios
x = model.dvar(n)       # here-and-now location decisions
y = model.dvar((n, n))  # wait-and-see transportation decisions
d = model.rvar(n)       # define random variables as the demand

y.adapt(d)              # the decision rule y affinely depends on d

# define the ambiguity set
fset = model.ambiguity()
for s in range(N):
    fset[s].suppset(d >= 0, d <= K, rso.norm(d - ds[s]) <= epsilon)
pr = model.p
fset.probset(pr == 1/N)

# define model objective and constraints
model.minsup((a*x).sum() + E((c*y).sum()), fset)
model.st(d <= y.sum(axis=0) - y.sum(axis=1) + x)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 38.8390s

The optimal lot-sizing decisions of the SP affine sample robust optimization method would be more conservative compared with the SAA method, as shown by the following figure.

MP affine

The sample robust optimization model above can be extended to the MP affine version, where the recourse decision $\pmb{y}$ not only affinely depends on $\pmb{d}$ but also adapts to individual scenarios. The code, together with the optimal allocation of stocks, is presented below. It can be seen that the solution is less conservative compared with the SP affine case, because the recourse decision has a higher level of flexibility in approximating the actual recourse decisions.

model = dro.Model(N)    # define a model with N scenarios
x = model.dvar(n)       # here-and-now location decisions
y = model.dvar((n, n))  # wait-and-see transportation decisions
d = model.rvar(n)       # define random variables as the demand

y.adapt(d)              # the decision rule y affinely depends on d
for s in range(N):
    y.adapt(s)          # the decision rule y also adapts to each scenario s

# define the ambiguity set
fset = model.ambiguity()
for s in range(N):
    fset[s].suppset(d >= 0, d <= K, rso.norm(d - ds[s]) <= epsilon)
pr = model.p
fset.probset(pr == 1/N)

# define model objective and constraints
model.minsup((a*x).sum() + E((c*y).sum()), fset)
model.st(d <= y.sum(axis=0) - y.sum(axis=1) + x)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 19.4076s

Wass SW

As pointed out in Bertsimas et al. (2021), the distributionally robust optimization model with the Wasserstein ambiguity set proposed in Chen et al. (2020) only yields one feasible first-stage decision $\pmb{x}=(K, K, ..., K)$, because this is the only solution that guarantees the feasibility of the second-stage problem over the entire support of $\pmb{d}$. In order to make the numerical implementation more meaningful, we introduce an emergency order $w_i$ to fill up the shortage at each location $i$. The cost of the emergency order is assumed to be five times the stock cost $a_i$. The code for implementing this distributionally robust optimization model is given below.

model = dro.Model(N)    # define a model with N scenarios
x = model.dvar(n)       # define location decisions
y = model.dvar((n, n))  # define decision rule as the shifted quantities
w = model.dvar(n)       # define decision rule as the emergency cost
d = model.rvar(n)       # define random variables as the demand
u = model.rvar()        # define an auxiliary random variable

# the decision rules y and w affinely depend on d and u,
# and also adapt to each scenario s
y.adapt(d)
y.adapt(u)
w.adapt(d)
w.adapt(u)
for s in range(N):
    y.adapt(s)
    w.adapt(s)

# define the ambiguity set
fset = model.ambiguity()
for s in range(N):
    fset[s].suppset(d >= 0, d <= K, rso.norm(d - ds[s], 1) <= u)
fset.exptset(E(u) <= epsilon)
pr = model.p
fset.probset(pr == 1/N)

# define model objective and constraints
model.minsup((a*x).sum() + E((c*y).sum() + (5*a*w).sum()), fset)
model.st(d <= y.sum(axis=0) - y.sum(axis=1) + x + w)
model.st(y >= 0, y <= b)
model.st(x >= 0, x <= K)
model.st(w >= 0)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 19.1652s
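To see what the second-stage constraint d <= y.sum(axis=0) - y.sum(axis=1) + x + w encodes, here is a standalone numpy check of the net-inventory balance at each location, with made-up numbers (n = 3 locations) that are purely illustrative:

```python
import numpy as np

# Each location must cover its demand with its own stock x, plus
# shipments received (column sums of y), minus shipments sent out
# (row sums of y), plus any emergency order w.
x = np.array([10.0, 5.0, 0.0])        # here-and-now stock levels
d = np.array([6.0, 8.0, 4.0])         # realized demand
y = np.array([[0.0, 3.0, 1.0],        # y[i, j]: amount shifted from i to j
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
w = np.array([0.0, 0.0, 3.0])         # emergency orders

net = y.sum(axis=0) - y.sum(axis=1) + x + w
print(net)               # stock available at each location after shifts
print(np.all(d <= net))  # feasibility of this demand realization
```

Here location 1 ships part of its stock to locations 2 and 3, and location 3 covers its remaining shortage with an emergency order.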

The optimal lot-sizing decision is illustrated by the following figure.

Reference

Bertsimas, Dimitris, Shimrit Shtern, and Bradley Sturt. 2021. Two-stage sample robust optimization. Operations Research.

Chen, Zhi, Melvyn Sim, and Peng Xiong. 2020. Robust stochastic optimization made easy with RSOME. Management Science 66(8) 3329–3339.

• In this example, we build a distributionally robust optimization model for an optimal vehicle pre-allocation problem, and demonstrate how covariate information from historical data can be incorporated into the ambiguity set. RSOME: Robust Stochastic Optimization Made Easy

Robust Vehicle Pre-Allocation with Uncertain Covariates

In this example, we consider the vehicle pre-allocation problem introduced in Hao et al. (2020). Suppose that there are I supply nodes and J demand nodes in an urban area. Before the random demand d̃ = (d̃_j)_{j∈[J]} realizes, the operator allocates x_ij vehicles from supply node i ∈ [I] (which has q_i idle vehicles) to demand node j ∈ [J] at a unit cost c_ij; once the uncertain demand is realized, the revenue is calculated as ∑_{j∈[J]} r_j min{∑_{i∈[I]} x_ij, d_j}. Considering the demand randomness, the distributionally robust vehicle pre-allocation problem can be formulated as follows:

\begin{aligned} \min~&\sum\limits_{i\in[I]}\sum\limits_{j\in[J]}(c_{ij} - r_j)x_{ij} + \sup\limits_{\mathbb{P}\in\mathcal{F}}\mathbb{E}_{\mathbb{P}}\left[\sum\limits_{j\in[J]}r_jy_j(\tilde{s}, \tilde{\pmb{d}}, \tilde{\pmb{u}})\right] && \\ \text{s.t.}~&y_j(s, \pmb{d}, \pmb{u}) \geq \sum\limits_{i\in[I]}x_{ij} - d_j && \forall (\pmb{d}, \pmb{u}) \in \mathcal{Z}_s, \forall s \in [S], \forall j \in [J] \\ &y_j(s, \pmb{d}, \pmb{u}) \geq 0 && \forall (\pmb{d}, \pmb{u}) \in \mathcal{Z}_s, \forall s \in [S], \forall j \in [J] \\ &\sum\limits_{j\in[J]}x_{ij} \leq q_i && \forall i \in [I] \\ &x_{ij} \geq 0 && \forall i \in[I], \forall j \in [J], \\ \end{aligned}

where y_j(s, d, u) is the adaptive decision representing the surplus of allocated vehicles over demand; it adapts to each scenario s and depends affinely on the random variables d and u, as a tractable approximation of the non-anticipative decision. Following Hao et al. (2020), model parameters are summarized as follows:
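The scenario-wise affine structure described above, one affine rule per scenario s of the form y(s, d, u) = y0[s] + Yd[s] @ d + Yu[s] @ u, can be illustrated with a small standalone numpy sketch. The coefficients y0, Yd, and Yu below are arbitrary illustrative values, not solver output:

```python
import numpy as np

J, S = 3, 2                     # toy sizes: demand nodes and scenarios
rng = np.random.default_rng(0)

# One affine policy per scenario
y0 = rng.random((S, J))         # intercepts
Yd = rng.random((S, J, J))      # sensitivities to the demand d
Yu = rng.random((S, J, J))      # sensitivities to the auxiliary variable u

def recourse(s, d, u):
    """Evaluate the scenario-s affine decision rule at realization (d, u)."""
    return y0[s] + Yd[s] @ d + Yu[s] @ u

d, u = rng.random(J), rng.random(J)
print(recourse(0, d, u))  # the policy of scenario 0 at this realization
print(recourse(1, d, u))  # a different policy applies under scenario 1
```

The same realization (d, u) thus yields different recourse actions depending on which scenario materializes, which is exactly the extra flexibility the scenario adaptation buys.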

• Number of supply nodes: I = 1;
• Number of demand nodes: J = 10;
• Coefficients r̂ = (15.0, 14.1, 6.1, 14.9, 13.8, 15.8, 15.3, 16.4, 15.8, 13.2);
• Revenue coefficients r_j = 0.1 r̂_j + 3, for j = 1, 2, ..., J;
• Cost coefficients c_j = 3, for j = 1, 2, ..., J;
• Maximum supply of vehicles q_i = 400, for i = 1, 2, ..., I.

The ambiguity set F presented below considers the conditional means and variances of the S scenarios,

\begin{aligned} \mathcal{F} = \left\{ \mathbb{P}\in\mathcal{P}_0(\mathbb{R}^J\times\mathbb{R}^J\times [S]) \left| \begin{array}{ll} (\tilde{\pmb{d}}, \tilde{\pmb{u}}, \tilde{s}) \sim \mathbb{P} & \\ \mathbb{E}_{\mathbb{P}}[\tilde{\pmb{d}}|\tilde{s}=s] = \pmb{\mu}_s & \forall s \in [S] \\ \mathbb{E}_{\mathbb{P}}[\tilde{\pmb{u}}|\tilde{s}=s] = \pmb{\phi}_s & \forall s \in [S] \\ \mathbb{P}[(\tilde{\pmb{d}}, \tilde{\pmb{u}}) \in \mathcal{Z}_s | \tilde{s}=s] = 1 & \forall s \in [S] \\ \mathbb{P}[\tilde{s}=s] = w_s & \forall s \in [S] \\ \end{array} \right. \right\}. \end{aligned}

The scenarios and parameters of the ambiguity set are identified from the dataset taxi_rain.csv, where the first ten columns are the taxi demand data for ten regions and the remaining columns are the corresponding side information in terms of rainfall records. Note that the dataset is slightly different from the data used in Hao et al. (2020), as small random noise has been added to the demand data. We use a multivariate regression tree to generate the S scenarios (leaf nodes of the tree), and the conditional means and variances for each scenario are calculated accordingly. The code is provided as follows.

import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor

data = pd.read_csv('taxi_rain.csv')
D, V = data.iloc[:, :10], data.iloc[:, 10:]         # D: demand & V: side info.

regr = DecisionTreeRegressor(max_leaf_nodes=4,      # max leaf nodes
                             min_samples_leaf=3)    # min sample size of each leaf
regr.fit(V, D)

mu, index, counts = np.unique(regr.predict(V), axis=0,
                              return_inverse=True,
                              return_counts=True)   # mu as the conditional mean

w = counts/V.shape[0]                               # scenario weights
phi = np.array([(D.values[index==i] - mu[i]).std(axis=0)
                for i in range(len(counts))])       # conditional variance
d_ub = np.array([D.values[index==i].max(axis=0)
                 for i in range(len(counts))])      # upper bound of each scenario
d_lb = np.array([D.values[index==i].min(axis=0)
                 for i in range(len(counts))])      # lower bound of each scenario
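The scenario construction above hinges on np.unique with return_inverse and return_counts: identical rows of tree predictions collapse into one scenario, the inverse index maps each sample back to its scenario, and the counts give the empirical weights. A minimal standalone illustration with made-up predictions:

```python
import numpy as np

# Tree predictions for 5 samples; rows 0 and 2 coincide, as do
# rows 1, 3, and 4, so there are two scenarios (leaf nodes).
pred = np.array([[1.0, 2.0],
                 [3.0, 4.0],
                 [1.0, 2.0],
                 [3.0, 4.0],
                 [3.0, 4.0]])

mu, index, counts = np.unique(pred, axis=0,
                              return_inverse=True,
                              return_counts=True)
w = counts / pred.shape[0]    # empirical scenario weights

print(mu)       # unique rows (sorted): [[1. 2.] [3. 4.]]
print(index)    # scenario of each sample: [0 1 0 1 1]
print(w)        # weights: [0.4 0.6]
```

The boolean mask index==i then selects all samples that fall into leaf node i, which is how the conditional moments and bounds are computed per scenario.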

The structure of the tree is displayed in the following diagram, an example with four leaf nodes where the minimum sample size of each leaf is three. By exploiting the moment information of each leaf node, we can formulate the model with the following code segment.

from rsome import dro
from rsome import square
from rsome import E
from rsome import grb_solver as grb
import numpy as np

J = 10
I = 1
rhat = np.array([15.0, 14.1, 6.1, 14.9, 13.8, 15.8, 15.3, 16.4, 15.8, 13.2])
b = 3 * np.ones(J)
r = 0.1*rhat + b
c = np.zeros((I, J)) + b
q = 400 * np.ones(I)

S = mu.shape[0]                             # the number of leaf nodes (scenarios)
model = dro.Model(S)                        # create a model with S scenarios

d = model.rvar(J)
u = model.rvar(J)
fset = model.ambiguity()
for s in range(S):
    fset[s].exptset(E(d) == mu[s],
                    E(u) <= phi[s])         # conditional expectations
    fset[s].suppset(d >= d_lb[s], d <= d_ub[s],
                    square(d - mu[s]) <= u) # support of each scenario
pr = model.p                                # scenario probabilities
fset.probset(pr == w)                       # w as scenario weights

x = model.dvar((I, J))
y = model.dvar(J)
y.adapt(d)                                  # y affinely depends on d
y.adapt(u)                                  # y affinely depends on u
for s in range(S):
    y.adapt(s)                              # y adapts to each scenario s

model.minsup(((c - r)*x).sum() + E(r@y), fset)
model.st(y >= x.sum(axis=0) - d, y >= 0)
model.st(x >= 0, x.sum(axis=1) <= q)

model.solve(grb)

Being solved by Gurobi...
Solution status: 2
Running time: 0.0591s

Reference

Hao, Zhaowei, Long He, Zhenyu Hu, and Jun Jiang. 2020. Robust vehicle pre-allocation with uncertain covariates. Production and Operations Management 29(4) 955–972.

• In this example, we use RSOME to build a distributionally robust optimization model for a medical appointment scheduling problem. RSOME: Robust Stochastic Optimization Made Easy

Distributionally Robust Medical Appointment

In this example, we consider a medical appointment scheduling problem described in Bertsimas et al. (2019), where N patients arrive at their stipulated schedules and may have to wait in a queue to be served by a physician. The patients' consultation times are uncertain, and their arrival schedules, determined at the first stage, influence the waiting times of the patients and the overtime of the physician.

The distributionally robust optimization model presented below minimizes the worst-case expected total cost of patient waiting and physician overtime over a partial cross-moment ambiguity set.

\begin{aligned} \min~&\sup\limits_{\mathbb{P}\in\mathcal{F}}~\mathbb{E}_{\mathbb{P}}\left[\sum\limits_{j=1}^N y_j(\tilde{\pmb{z}}, \tilde{\pmb{u}}) + \gamma y_{N+1}(\tilde{\pmb{z}}, \tilde{\pmb{u}})\right] && \\ \text{s.t.}~&y_j(\pmb{z}, \pmb{u}) - y_{j-1}(\pmb{z}, \pmb{u}) + x_{j-1} \geq z_{j-1} && \forall (\pmb{z}, \pmb{u})\in \mathcal{Z}, ~\forall j \in \{2, 3, ..., N+1\} \\ &\pmb{y}(\pmb{z}, \pmb{u}) \geq \pmb{0}, && \forall (\pmb{z}, \pmb{u})\in \mathcal{Z} \\ &\pmb{x} \geq \pmb{0},~\sum\limits_{j=1}^N x_j \leq T && \\ \end{aligned}

with the random variable z_j representing the uncertain consultation time of each patient. The first-stage decision x_j denotes the inter-arrival time between patient j and the adjacent patient j+1, for j ∈ [N-1], while x_N is the time between the arrival of the last patient and the scheduled completion time of the physician before overtime commences. The second-stage decision y_j denotes the waiting time of patient j, for j ∈ [N], and y_{N+1} represents the overtime of the physician. To achieve a tractable formulation, the second-stage decision is approximated by a decision rule y(z, u) that adapts affinely to the random variables z and the auxiliary variables u.
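For a fixed schedule and realized consultation times, the patient waiting times and physician overtime follow a simple max-recursion, which the linear constraints of the model (in code form, y[1:] - y[:-1] + x >= z) relax. A standalone simulation sketch with made-up numbers:

```python
import numpy as np

def waits_and_overtime(x, z):
    """Waiting times and physician overtime for a fixed schedule x
    (inter-arrival times, last entry ending at the completion time T)
    and realized consultation times z."""
    N = len(z)
    y = np.zeros(N + 1)           # y[0]: wait of patient 1 (always zero)
    for j in range(1, N + 1):
        # the next patient waits if the previous wait plus consultation
        # time exceeds the scheduled gap
        y[j] = max(0.0, y[j - 1] + z[j - 1] - x[j - 1])
    return y[:-1], y[-1]          # waits of patients 1..N, overtime

x = np.array([40.0, 40.0, 40.0])  # scheduled gaps
z = np.array([50.0, 30.0, 45.0])  # realized consultation times
waits, overtime = waits_and_overtime(x, z)
print(waits)     # [ 0. 10.  0.]
print(overtime)  # 5.0
```

The total cost for this realization would be waits.sum() + gamma * overtime; the distributionally robust model bounds its worst-case expectation over the ambiguity set.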

In this numerical experiment, we consider the lifted form of the partial cross-moment ambiguity set F given below.

\begin{aligned} \mathcal{F} = \left\{ \mathbb{P}\in\mathcal{P}(\mathbb{R}^N\times\mathbb{R}^{N+1}) \left| \begin{array}{ll} ~(\tilde{\pmb{z}}, \tilde{\pmb{u}}) \sim \mathbb{P} & \\ \pmb{z} \geq \pmb{0} & \\ (z_j - \mu_j)^2 \leq u_j, & \forall j \in [N] \\ (\pmb{1}^{\top}(\pmb{z} - \pmb{\mu}))^2 \leq u_{N+1} & \\ \mathbb{E}_{\mathbb{P}}(\tilde{\pmb{z}}) = \pmb{\mu} & \\ \mathbb{E}_{\mathbb{P}}(\tilde{u}_j) \leq \sigma_j^2, & \forall j \in [N] \\ \mathbb{E}_{\mathbb{P}}(\tilde{u}_{N+1}) \leq \pmb{e}^{\top}\pmb{\Sigma}\pmb{e} & \end{array} \right. \right\}. \end{aligned}

Values of model and ambiguity set parameters are specified as follows:

• Number of patients: N = 8;
• Unit cost of physician overtime: γ = 2;
• Correlation parameter: α = 0.25;
• Expected consultation times: μ_j are random numbers uniformly distributed over [30, 60];
• Standard deviations of the consultation times: σ_j = μ_j ε_j, where ε_j is a random number uniformly distributed over [0, 0.3];
• The scheduled completion time for the physician before overtime commences:

T=\sum\limits_{j=1}^N\mu_j + 0.5\sqrt{\sum\limits_{j=1}^N\sigma_j^2};

• Covariance matrix Σ, with elements:

\left(\pmb{\Sigma}\right)_{ij} = \begin{cases} \alpha \sigma_i\sigma_j & \text{if } i \neq j \\ \sigma_j^2 & \text{otherwise}. \end{cases}
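The covariance matrix can be built element by element exactly as defined above; the vectorized construction used in the code below, (sigma.T @ sigma) * mul, is equivalent. A small self-contained check with arbitrary σ values:

```python
import numpy as np

alpha = 0.25
sigma = np.array([2.0, 3.0, 5.0])   # arbitrary standard deviations
N = len(sigma)

# Element-wise definition: alpha*sigma_i*sigma_j off the diagonal,
# sigma_j**2 on it
Sigma = np.array([[alpha * sigma[i] * sigma[j] if i != j else sigma[j]**2
                   for j in range(N)] for i in range(N)])

# Equivalent vectorized form: outer product scaled by a correlation mask
mul = np.eye(N) * (1 - alpha) + np.ones((N, N)) * alpha
Sigma_vec = np.outer(sigma, sigma) * mul

print(np.allclose(Sigma, Sigma_vec))  # True
print(np.diag(Sigma))                 # [ 4.  9. 25.]
```

The mask mul has ones on the diagonal and α off it, so multiplying it with the outer product of σ reproduces the case-wise definition in one line.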

The RSOME code for implementing this distributionally robust optimization model is given below.

from rsome import dro
from rsome import square
from rsome import E
from rsome import grb_solver as grb
import numpy as np
import numpy.random as rd

# model and ambiguity set parameters
N = 8
gamma = 2
alpha = 0.25
mu = 30 + 30*rd.rand(N)
sigma = mu * 0.3*rd.rand(1, N)
T = mu.sum() + 0.5*((sigma**2).sum())**0.5

mul = np.diag(np.ones(N))*(1-alpha) + np.ones((N, N))*alpha
Sigma = (sigma.T @ sigma) * mul

# modeling with RSOME
model = dro.Model()                                  # define a DRO model
z = model.rvar(N)                                    # random variable z
u = model.rvar(N+1)                                  # auxiliary variable u

fset = model.ambiguity()
fset.suppset(z >= 0, square(z - mu) <= u[:-1],
             square((z-mu).sum()) <= u[-1])      # support of random variables
fset.exptset(E(z) == mu, E(u[:-1]) <= sigma**2,
             E(u[-1]) <= Sigma.sum())            # bounds on expectations

x = model.dvar(N)                                    # the here-and-now decision
y = model.dvar(N+1)                                  # the wait-and-see decision
y.adapt(z)                                           # y affinely adapts to z
y.adapt(u)                                           # y affinely adapts to u

model.minsup(E(y[:-1].sum() + gamma*y[-1]), fset)    # worst-case expected cost
model.st(y[1:] - y[:-1] + x >= z)                    # constraints
model.st(y >= 0)                                     # constraints
model.st(x >= 0)                                     # constraints
model.st(x.sum() <= T)                               # constraints

model.solve(grb)                                     # solve the model by Gurobi

Being solved by Gurobi...
Solution status: 2
Running time: 0.0516s

Reference

Bertsimas, Dimitris, Melvyn Sim, and Meilin Zhang. 2019. Adaptive distributionally robust optimization. Management Science 65(2) 604-618.

• 之前有用来做神经网络的解释性分析，虽然似乎被怼了，但是可以用来做其他事情啊，比如这篇文章就是说的用信息瓶颈来增强模型的鲁棒性。这篇文章讲的是一种在图上做信息瓶颈压缩结构与节点信息，旨在希望能滤除不重要... 算法 神经网络 人工智能 深度学习 机器学习  ...