Adil Bagirov (University of Ballarat)
Nonconvex piecewise linear optimization: optimality conditions and numerical methods
In this talk, we present optimality conditions and numerical algorithms for unconstrained local and global minimization of continuous piecewise linear functions. First, we discuss necessary and sufficient local and global optimality conditions for such problems. These conditions are formulated using the concept of codifferential. Since any continuous piecewise linear function can be represented as a difference of polyhedral (DP) functions, we use this representation to demonstrate that in many practical situations these conditions can be efficiently checked. Then we present algorithms for local and global minimization of DP functions. We prove that the proposed algorithms are finitely convergent. Examples are presented to demonstrate the performance of the global search algorithm.
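The DP representation mentioned in the abstract is easy to make concrete: a continuous piecewise linear function is written as a difference of two polyhedral (max-of-affine) functions. A minimal sketch, with an illustrative example function not taken from the talk:

```python
def polyhedral(pieces):
    """Return a polyhedral (max-of-affine) function x -> max_i (a_i*x + b_i)."""
    return lambda x: max(a * x + b for a, b in pieces)

# f(x) = |x| - |x - 1| is a nonconvex piecewise linear function,
# written here as a difference of two polyhedral functions (a DP representation).
p = polyhedral([(1.0, 0.0), (-1.0, 0.0)])    # |x|     = max(x, -x)
q = polyhedral([(1.0, -1.0), (-1.0, 1.0)])   # |x - 1| = max(x - 1, 1 - x)
f = lambda x: p(x) - q(x)

# Sanity check against the direct formula.
for x in (-2.0, 0.0, 0.5, 1.0, 3.0):
    assert abs(f(x) - (abs(x) - abs(x - 1))) < 1e-12
```

Optimality conditions for such functions can then be checked piece by piece on the affine data of p and q, which is what makes the DP form computationally convenient.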
Jonathan Borwein (University of Newcastle)
On difference convex functions
Advances over the past fifteen years have led to a rich current theory of difference-convex functions. I shall describe the state of our knowledge and highlight some open questions. This is joint work with Miroslav Bacak.
Roberto Cominetti (Universidad de Chile)
Short-term revenue management: optimal targeting of customers for a last minute offer
We discuss a short-term revenue optimization problem that involves the optimal targeting of customers for a promotional sale in which a finite number of perishable items are offered on a last-minute offer. The goal is to select the subset of customers to whom the offer will be made available, maximizing the expected return.
Each client replies with a certain probability and has a value that depends on the customer type, so the selected subset must balance the risk of not selling all the items against the risk of assigning an item to a low-value customer.
Selecting all those clients with values above a certain optimal threshold may fail to achieve the maximal revenue. However, using a linear programming relaxation, we prove that such threshold strategies attain a constant factor of the optimal value. The achieved factor is 1/2 when a single item is to be sold, and approaches 1 as the number of available items grows to infinity. Furthermore, for the single-item case, we propose an upper bound based on an exponential-size linear program that allows us to obtain a threshold strategy achieving at least 2/3 of the optimal revenue. Computational experiments with random instances show significantly better performance than the theoretical predictions.
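The gap between threshold (prefix-by-value) strategies and the unrestricted optimum can be explored by brute force on tiny instances. The sketch below uses a simplified model in which the single item is awarded uniformly at random among respondents; this allocation rule and the data are assumptions for illustration, not the talk's precise model:

```python
from itertools import combinations

# Customers: (value, response probability). Illustrative data, not from the talk.
customers = [(10.0, 0.3), (5.0, 1.0), (4.9, 0.3)]

def expected_revenue(subset):
    """Expected revenue of offering one item to `subset`, assuming the item
    goes to a uniformly random respondent (a modeling assumption)."""
    idx = list(subset)
    total = 0.0
    for r in range(1, len(idx) + 1):
        for resp in combinations(idx, r):
            prob = 1.0
            for i in idx:
                v, p = customers[i]
                prob *= p if i in resp else (1.0 - p)
            total += prob * sum(customers[i][0] for i in resp) / len(resp)
    return total

n = len(customers)
all_subsets = [s for r in range(1, n + 1) for s in combinations(range(n), r)]
best = max(expected_revenue(s) for s in all_subsets)

# Threshold strategies: prefixes of the customers sorted by decreasing value.
order = sorted(range(n), key=lambda i: -customers[i][0])
best_prefix = max(expected_revenue(tuple(order[:k])) for k in range(1, n + 1))

# On this instance the best prefix is within the guaranteed factor 1/2.
assert best_prefix >= 0.5 * best
```

Note how including the guaranteed-response, low-value customer can dilute the expected revenue, which is exactly why offering to everyone above no threshold at all is suboptimal.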
Andrew Eberhard (Royal Melbourne Institute of Technology)
Optimality conditions in nonsmooth analysis via approximation
In this talk we consider to what extent one can derive first-order and second-order necessary (and sufficient) optimality conditions for a general class of constrained optimization problems using smoothing regularization procedures based on infimal-like convolutions/envelopes. In this way one may take as the departure point for this study very standard classical stationarity concepts. The fact that we are able to derive many more complex concepts and interrelate these via approximation procedures strongly suggests that the concepts arrived at are indeed quite natural.
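The prototypical infimal-convolution smoothing is the Moreau envelope, which for f(x) = |x| yields the Huber function in closed form. A sketch of this standard special case (not the talk's general setting), checking a grid approximation of the envelope against the closed form:

```python
def moreau_envelope(f, x, lam, grid):
    """e_lam f(x) = min_y [ f(y) + (x - y)**2 / (2*lam) ], approximated on a grid."""
    return min(f(y) + (x - y) ** 2 / (2.0 * lam) for y in grid)

lam = 0.5
grid = [i / 1000.0 for i in range(-3000, 3001)]  # y in [-3, 3], step 1e-3

def huber(x, lam):
    """Closed form of the Moreau envelope of |.|: quadratic near 0, affine outside."""
    return x * x / (2.0 * lam) if abs(x) <= lam else abs(x) - lam / 2.0

for x in (-2.0, -0.3, 0.0, 0.4, 1.5):
    assert abs(moreau_envelope(abs, x, lam, grid) - huber(x, lam)) < 1e-4
```

The envelope is everywhere differentiable although |x| is not, while its minimizers coincide with those of |x|; this is the mechanism that lets classical stationarity concepts for the smoothed problem approximate nonsmooth ones.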
Vladimir Gaitsgory (University of South Australia)
Use of semi-infinite linear programming for constructing an approximate subsolution of Hamilton-Jacobi-Bellman equation and finding near optimal controls
An optimal control problem can be restated as a problem of optimisation over the set of occupational measures that are generated by the admissible control-trajectory pairs. Under non-restrictive assumptions, the latter proves to be equivalent to an infinite dimensional (ID) linear programming (LP) problem. An optimal solution of the problem dual to this IDLP problem coincides with a smooth viscosity subsolution of the Hamilton-Jacobi-Bellman (HJB) equation for the original optimal control problem, and (if it exists) it allows one to construct the optimal control. It, however, may not exist, and we propose to approximate the IDLP problem by a sequence of semi-infinite (SI) LP problems. Optimal solutions of the problems dual to these SILP problems exist under natural conditions. They can be interpreted as approximate subsolutions of the HJB equation and can be used for finding near optimal controls.
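For orientation, in the long-run average cost case (a standard special case used here only for illustration; the talk's setting may be more general) the occupational-measure LP and its dual take the form:

```latex
% Primal: minimize the running cost over occupational (probability) measures \mu
\min_{\mu}\ \int g(x,u)\, d\mu(x,u)
\quad \text{s.t.} \quad
\int \nabla \varphi(x) \cdot f(x,u)\, d\mu(x,u) = 0 \quad \forall\, \varphi \in C^1 .

% Dual: maximize d over pairs (d, \psi) with \psi smooth such that
d \le g(x,u) + \nabla \psi(x) \cdot f(x,u) \quad \forall\, (x,u),
% i.e., \psi is a smooth subsolution of the (averaged) HJB equation.
```

Restricting the test functions \varphi and the constraints to finite grids turns the primal into a semi-infinite LP, whose dual solutions are the approximate subsolutions referred to in the abstract.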
Alex Kruger (University of Ballarat)
Stability of error bounds for convex constrained systems
In this talk, I am going to show that certain known sufficient conditions for local and global error bounds in Banach spaces actually ensure error bounds for a family of convex functions that are, in a sense, small perturbations of the given one. Subdifferential slopes are used for formulating error bound criteria.
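For reference, the local error bound property for a convex function f at a point \bar{x} of the solution set reads (standard definition; the perturbation analysis is the talk's contribution):

```latex
\exists\, c > 0,\ \delta > 0: \quad
d(x, S) \le c\, [f(x)]_+ \quad \forall\, x \in B(\bar{x}, \delta),
\qquad
S = \{ x : f(x) \le 0 \}, \quad [f(x)]_+ = \max\{ f(x), 0 \}.
```

Stability then asks whether the same constant c (or one close to it) serves for all functions in a neighbourhood of f.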
Helmut Maurer (University of Muenster)
Optimal controls minimizing tumor volume for a mathematical model of combination therapy
We consider the problem of minimizing the tumor volume with given amounts of anti-angiogenic and cytotoxic agents. For one underlying dynamical model of tumor growth, optimal solutions are given for two versions of the problem: the monotherapy case, when only anti-angiogenic agents are administered, and the combination therapy case, when a cytotoxic agent is added as a second control variable. The optimal anti-angiogenic control in the monotherapy case typically has a bang-singular-bang structure. The singular control can be computed in feedback form, which allows one to prove optimality by a synthesis analysis. Numerical solutions in the combination therapy case show that the anti-angiogenic control follows the bang-singular-bang pattern of the control in the monotherapy case, while the cytotoxic control is bang-bang with a maximum dose at the end of the therapy. The numerical approach is based on techniques of direct switching time optimization.
This is joint work with Urszula Ledzewicz and Heinz Schaettler.
Hans Josef Pesch (University of Bayreuth)
The maximum principle of optimal control: a history of ingenious ideas and missed opportunities
After more than 50 years of optimal control theory, it is time to look back on the early days of optimal control, in particular on the development of the maximum principle during the early years of the Cold War, when mathematicians in the US and the USSR made every endeavor to solve minimum-time interception problems, which later became prototypes of the first optimal control problems.
In the talk, a short survey of the history of optimal control is given, which turns out to be a sequence of ingenious ideas and regrets over missed opportunities.
The conclusions are mainly extracted from Michael Plail's monograph on the development of optimal control theory from its commencement until it became an independent discipline in mathematics.
This is joint work with Michael Plail.
Kok Lay Teo (Curtin University)
An exact penalty function method for continuous inequality constrained optimal control problems
In this paper, a computational approach based on an exact penalty function method is devised for solving a class of terminal equality and continuous inequality constrained optimal control problems. By using the control parameterization technique and a time scaling transformation, the problem is approximated by a class of terminal equality and continuous inequality constrained optimization problems. A new exact penalty function method is used to solve this optimization problem. The main idea is to augment the objective function with exact penalty terms constructed from the terminal equality constraints and continuous inequality constraints, forming a new objective function. This gives rise to a sequence of unconstrained optimization problems. It is shown that, for a sufficiently large value of the penalty parameter, any local minimizer of the unconstrained optimization problem is a local minimizer of the original problem. For illustration, three examples are solved using the proposed method. Numerical results indicate that the proposed method is effective.
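The exactness phenomenon underlying such methods shows up already in one dimension. The sketch below uses a generic l1-type exact penalty on the toy problem "minimize (x - 2)^2 subject to x <= 1" (this is not the specific penalty function constructed in the paper): the nonsmooth penalty recovers the constrained minimizer x = 1 exactly for a finite penalty parameter, whereas a quadratic penalty only approaches it.

```python
def argmin_on_grid(f, lo=0.0, hi=3.0, steps=30000):
    """Crude grid minimizer, sufficient for this 1-D illustration."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(xs, key=f)

obj = lambda x: (x - 2.0) ** 2          # objective
viol = lambda x: max(0.0, x - 1.0)      # violation of the constraint x <= 1
sigma = 10.0                            # penalty parameter (any sigma > 2 suffices here)

x_exact = argmin_on_grid(lambda x: obj(x) + sigma * viol(x))        # l1 (exact) penalty
x_quad  = argmin_on_grid(lambda x: obj(x) + sigma * viol(x) ** 2)   # quadratic penalty

# x_exact lands (numerically) on the constrained minimizer 1;
# x_quad stays strictly above 1, at (2 + sigma) / (1 + sigma).
```

This is why a single, sufficiently large penalty parameter can yield local minimizers of the penalized problem that are local minimizers of the original constrained problem, without driving the parameter to infinity.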