Questions tagged [stochastic-dual-dynamic-programming]
For questions about stochastic dual dynamic programming (SDDP), a sampling-based cutting-plane algorithm for multi-stage stochastic programs, and related software such as SDDP.jl.
13 questions
-1 votes, 1 answer, 76 views
Using CPLEX in the Julia environment of Google Colaboratory
I have a multi-stage mixed-integer stochastic global supply chain optimization problem formulation and want to solve its stage-wise problem instances using the CPLEX solver within the SDDiP algorithm ...
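A minimal sketch of how a CPLEX-backed SDDP.jl model is set up once CPLEX.jl is installed and licensed; it does not cover the Colab installation itself, the model data are purely illustrative, and the duality_handler option is only available in recent SDDP.jl versions.

    using SDDP, JuMP, CPLEX   # assumes CPLEX.jl is installed and can locate a licensed CPLEX

    model = SDDP.LinearPolicyGraph(;
        stages = 2, sense = :Min, lower_bound = 0.0, optimizer = CPLEX.Optimizer
    ) do sp, t
        @variable(sp, 0 <= x <= 10, SDDP.State, initial_value = 0)
        @variable(sp, buy >= 0, Int)              # integer decision, as in SDDiP-style models
        @constraint(sp, x.out == x.in + buy - 2)  # toy inventory balance
        @stageobjective(sp, 3 * buy)
    end

    # Lagrangian duality handles the integer subproblems (SDDiP-style cuts).
    SDDP.train(model; iteration_limit = 10, duality_handler = SDDP.LagrangianDuality())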
0 votes, 1 answer, 127 views
What do we call the situation in which SDDP can solve a non-recoursable problem because feasible actions that lead to later infeasibility cannot be optimal?
More precisely, what is the technical term for the situation in which the SDDP (stochastic dual dynamic programming) algorithm can solve a non-recoursable problem because the feasible actions ...
0 votes, 0 answers, 30 views
Entropic Value-at-Risk in stochastic optimization
I am looking for an illustrative example that uses "Entropic Value-at-Risk" in a stochastic optimization.
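A possible starting point (hedged): recent SDDP.jl versions include an entropic risk measure, SDDP.Entropic(γ), which is the risk measure from which EVaR is built (EVaR is obtained by optimizing an entropic bound over its risk-aversion parameter). A sketch of plugging it into training, assuming `model` is an existing SDDP.jl policy graph and that the installed SDDP.jl version exposes SDDP.Entropic:

    using SDDP

    # `model` is assumed to be an already built SDDP.jl policy graph.
    # SDDP.Entropic(γ) is the entropic risk measure (1/γ) * log E[exp(γ X)];
    # larger γ means more risk aversion.
    SDDP.train(model; risk_measure = SDDP.Entropic(0.1), iteration_limit = 100)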
0 votes, 1 answer, 73 views
How do I resolve the infeasibility error I receive in a very simple SDDP.jl model?
I am experimenting with SDDP.jl to develop a multistage stochastic capacity expansion model for district heating. I have started with a very simple model but am receiving an infeasibility error ...
0 votes, 1 answer, 77 views
Are quadratic multi-stage optimization problems with quadratic constraints solvable by stochastic dual dynamic programming and SDDP.jl?
I have a program with a quadratic objective and quadratic constraints, and I want to use the SDDP.jl package in Julia to solve it.
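For reference, SDDP.jl accepts convex quadratic objectives and constraints provided the subproblem optimizer supports them and returns duals. A minimal sketch with purely illustrative data, assuming Ipopt as the quadratic-capable solver; convexity of every subproblem is what keeps the generated cuts valid:

    using SDDP, JuMP, Ipopt   # Ipopt solves the convex quadratic subproblems and returns duals

    model = SDDP.LinearPolicyGraph(;
        stages = 3, sense = :Min, lower_bound = 0.0, optimizer = Ipopt.Optimizer
    ) do sp, t
        @variable(sp, 0 <= x <= 10, SDDP.State, initial_value = 5)
        @variable(sp, u >= 0)
        @constraint(sp, u^2 <= x.in + 1)                 # convex quadratic constraint
        @constraint(sp, x.out == x.in - u)
        @stageobjective(sp, (u - 1)^2 + 0.1 * x.out^2)   # convex quadratic objective
    end

    SDDP.train(model; iteration_limit = 20)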
0 votes, 1 answer, 88 views
How can I handle a constraint on a terminal state variable while avoiding infeasibility during the SDDP.jl training process?
I want to solve a multi-stage optimization problem using SDDP.jl, but I am having a hard time imposing constraints on the state variables at the final stage.
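One standard workaround, sketched here with hypothetical data: impose the terminal condition softly by adding a slack variable in the final stage and penalizing it in the stage objective, so every forward pass stays feasible while the penalty pushes the policy toward meeting the target.

    using SDDP, JuMP, HiGHS

    T = 12
    model = SDDP.LinearPolicyGraph(;
        stages = T, sense = :Min, lower_bound = 0.0, optimizer = HiGHS.Optimizer
    ) do sp, t
        @variable(sp, 0 <= storage <= 100, SDDP.State, initial_value = 50)
        @variable(sp, buy >= 0)
        @constraint(sp, storage.out == storage.in + buy - 10)   # demand of 10 per stage
        if t < T
            @stageobjective(sp, 2 * buy)
        else
            # Soft terminal condition storage.out >= 40, enforced via a penalized slack.
            @variable(sp, shortfall >= 0)
            @constraint(sp, storage.out + shortfall >= 40)
            @stageobjective(sp, 2 * buy + 1_000 * shortfall)
        end
    end

    SDDP.train(model; iteration_limit = 50)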
0 votes, 1 answer, 57 views
At the start of an SDDP.jl run, when no value-function approximation or terminal states are available, what cost-to-go estimate is used?
I am using SDDP.jl for my research and want to know, at the very start of execution, when there is no value-function initialization and no terminal state (for the backward recursion), how the cost-to-go ...
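A hedged pointer: before any cuts have been generated, SDDP.jl's cost-to-go approximation at every node is simply the constant bound supplied when the policy graph is built (lower_bound for minimization, upper_bound for maximization), and the first iterations work with that constant until cuts refine it. Illustrative placement of the keyword, with toy data:

    using SDDP, JuMP, HiGHS

    model = SDDP.LinearPolicyGraph(;
        stages = 3,
        sense = :Min,
        # With no cuts yet, the approximate cost-to-go at every node is this
        # constant, so it must be a valid lower bound on the true cost-to-go.
        lower_bound = 0.0,
        optimizer = HiGHS.Optimizer,
    ) do sp, t
        @variable(sp, x >= 0, SDDP.State, initial_value = 1)
        @variable(sp, u >= 0)
        @constraint(sp, x.out == x.in - u)
        @stageobjective(sp, 2 * u + x.out)
    end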
0 votes, 1 answer, 45 views
How do I convert an SDDP.State object into a number or tuple in SDDP.jl?
I want to collect evaluation data, and for that I need the data as numbers rather than SDDP.State objects. How can I do that in SDDP.jl?
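A hedged sketch of the usual pattern: in the dictionaries returned by SDDP.simulate, each recorded state is an SDDP.State whose .in and .out fields are plain numbers, so extracting those fields gives ordinary Float64 values. The `model` and the `:storage` symbol below are hypothetical.

    using SDDP, JuMP

    # `model` is assumed to be a trained SDDP.jl policy graph with a state variable `storage`.
    sims = SDDP.simulate(model, 10, [:storage])
    replication = sims[1]                                       # one simulated trajectory
    outgoing = [stage[:storage].out for stage in replication]   # Vector{Float64}
    incoming = [stage[:storage].in for stage in replication]
    # Inside a subproblem that has just been optimized, use JuMP.value instead:
    # JuMP.value(storage.out)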
2 votes, 1 answer, 96 views
Can I use continuous probability distributions when creating an SDDP.jl model?
I am using SDDP.jl for my research project and want to use a continuous probability distribution. Can I do so?
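For context, a hedged sketch: SDDP.parameterize expects a finite support, so the usual approach is to discretize the continuous distribution yourself, e.g. by sampling it once (a sample average approximation) and treating the draws as equally likely atoms. Distributions.jl and all model data below are illustrative assumptions.

    using SDDP, JuMP, HiGHS, Distributions, Random

    Random.seed!(1)
    # Discretize the continuous law by sampling 50 equally likely atoms.
    Ω = rand(Normal(10.0, 2.0), 50)
    P = fill(1 / length(Ω), length(Ω))

    model = SDDP.LinearPolicyGraph(;
        stages = 3, sense = :Min, lower_bound = 0.0, optimizer = HiGHS.Optimizer
    ) do sp, t
        @variable(sp, x >= 0, SDDP.State, initial_value = 20)
        @variable(sp, inflow)
        @variable(sp, buy >= 0)
        @constraint(sp, x.out == x.in + inflow + buy - 10)
        SDDP.parameterize(sp, Ω, P) do ω
            JuMP.fix(inflow, ω)   # fix the sampled noise realization in this subproblem
        end
        @stageobjective(sp, 5 * buy)
    end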
0 votes, 1 answer, 66 views
How can I write the output stream of SDDP.jl into an Excel file?
I am using SDDP.jl for my research project, in which I am developing a state-of-the-art actor-critic algorithm that I am going to benchmark against SDDP, but for that I need to plot graphs, which requires ...
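A hedged sketch of one common route: simulate the trained policy, collect the recorded values into a DataFrame, and write it to CSV, which Excel opens directly (XLSX.jl could be used instead for a native .xlsx file). The `model` and the `:storage` symbol are hypothetical.

    using SDDP, DataFrames, CSV

    # `model` is assumed to be an already trained SDDP.jl policy graph with a
    # state variable called `storage`.
    sims = SDDP.simulate(model, 100, [:storage])
    df = DataFrame(
        replication = Int[],
        stage = Int[],
        storage_out = Float64[],
        stage_objective = Float64[],
    )
    for (r, replication) in enumerate(sims), (t, stage) in enumerate(replication)
        push!(df, (r, t, stage[:storage].out, stage[:stage_objective]))
    end
    CSV.write("sddp_simulations.csv", df)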
1 vote, 1 answer, 241 views
Is stochastic dual dynamic programming (SDDP) a deterministic solution algorithm or does it have a stochastic component to it?
I am currently working on a paper in which I statistically compare dynamic optimization algorithms such as SDDP and actor-critic methods.
In this regard, should I be running the SDDP algorithm for my ...
1 vote, 1 answer, 220 views
Can stochastic dual dynamic programming algorithm (or any variant of it) handle multi-stage optimization problems with here-and-now uncertainty nodes?
The stochastic dual dynamic programming (SDDP) algorithm solves stage-wise optimization problems by sampling scenarios. In this regard, it is easy to see that wait-and-see uncertainty can be easily ...
2 votes, 1 answer, 182 views
Does SDDP converge to the deterministic equivalent objective when the underlying scenario tree is the same?
I am implementing a Stochastic Dual Dynamic Programming (SDDP) algorithm for a multi-stage linear stochastic program. To validate the SDDP solution, I also implemented an extensive form (...
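For a small scenario tree this can be checked directly: SDDP.jl can build the extensive form of the same policy graph with SDDP.deterministic_equivalent, and its optimal objective should match the converged SDDP bound up to the stopping tolerance. A hedged sketch, assuming `model` is the already built (finite) policy graph:

    using SDDP, JuMP, HiGHS

    SDDP.train(model; iteration_limit = 200)
    sddp_bound = SDDP.calculate_bound(model)

    # Build and solve the extensive form of the same finite policy graph.
    det = SDDP.deterministic_equivalent(model, HiGHS.Optimizer)
    JuMP.optimize!(det)
    extensive_obj = JuMP.objective_value(det)

    println("SDDP bound = $sddp_bound, extensive form = $extensive_obj")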