Necessary Optimality Conditions in a Problem with Integral Equations on a Nonfixed Time Interval Subject to Mixed and State Constraints

Abstract. We consider an optimal control problem with Volterra-type integral equations on a nonfixed time interval subject to endpoint constraints, mixed state-control constraints of equality and inequality type, and pure state inequality constraints. The main assumption is the positive-linear independence of the gradients of the active mixed constraints with respect to the control. We formulate first order necessary optimality conditions for an extended weak minimum, the notion of which is a natural generalization of the notion of weak minimum that takes variations of the time into account. The presented conditions generalize the local maximum principle in optimal control problems with ordinary differential equations.



Introduction
The results presented in this paper generalize the results obtained in our previous two papers [6] and [7]. Paper [6] was devoted to first order necessary conditions for a weak minimum in a general optimal control problem with Volterra-type integral equations, considered on a fixed time interval, subject to endpoint constraints of equality and inequality type, mixed state-control constraints of inequality and equality type, and pure state constraints of inequality type. Paper [7] studied first order necessary conditions for an extended weak minimum in an optimal control problem with Volterra-type integral equations considered on a nonfixed time interval, subject to endpoint constraints of equality and inequality type, but without mixed state-control constraints and pure state constraints.
Here we consider a problem generalizing both problems of [6] and [7]. We formulate first order necessary conditions for an extended weak minimum in this general problem. Following tradition, we call them stationarity conditions, or conditions of the local maximum principle. They are presented in Theorem 1.
As far as we know, such conditions for problems with integral equations on a variable time interval have not been obtained until now. Their novelty, as compared with those for problems on a fixed time interval, is that the costate equation and the transversality condition with respect to t involve nonstandard terms that are absent in problems with ODEs. More remarks concerning the existing literature on problems with integral equations can be found in papers [1-4, 6, 7].
As was already mentioned in [6], the stationarity conditions in optimal control problems constitute an important stage in obtaining any further necessary optimality condition, including the maximum principle or higher order conditions, and thus they deserve a separate thorough study for each specific class of problems.
The paper is organized as follows. In Section 2 we formulate a general optimal control problem with integral equations on a variable time interval, which we call Problem A. We also define in this section the notion of the extended weak minimum. Section 3 is devoted to the formulation of the main result of the paper, the local maximum principle in Problem A, which is the first order necessary condition for an extended weak minimum (Theorem 1). A short discussion of its proof is given in Section 4.

General optimal control problem with integral equations on a variable time interval (Problem A)
Consider the following control system of Volterra-type integral equations on a variable time interval [t_0, t_1]:

x(t) = x(t_0) + ∫_{t_0}^{t} f(t, s, x(s), u(s)) ds,   t ∈ [t_0, t_1],   (1)

where x(·) is a continuous n-dimensional and u(·) a measurable, essentially bounded r-dimensional vector function on [t_0, t_1]. As usual, we call x(·) the state variable and u(·) the control variable (or simply the control). A pair w(t) = (x(t), u(t)) defined on its own interval [t_0, t_1] and satisfying (1) for a.e. t ∈ [t_0, t_1] is called a process. We assume that the function f is defined and twice continuously differentiable on an open set R ⊂ R^{2+n+r}.
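To make the role of the two time arguments concrete, here is a minimal numerical sketch (ours, not from the paper) that discretizes an equation of type (1) with an assumed illustrative kernel f(t, s, x, u) = e^{-(t-s)} u and control u ≡ 1; this kernel does not use the state, which keeps the exact solution elementary:

```python
import math

# Minimal sketch (not from the paper): left-endpoint discretization of
# the Volterra-type equation x(t) = x0 + \int_{t0}^{t} f(t, s, x(s), u(s)) ds.
# The kernel f and the control u below are illustrative assumptions.

def f(t, s, x, u):
    # memory kernel: the influence of the control applied at time s
    # decays as the outer time t moves forward (x is unused here)
    return math.exp(-(t - s)) * u

def solve_volterra(x0, t0, t1, u, n=2000):
    h = (t1 - t0) / n
    s = [t0 + k * h for k in range(n + 1)]
    x = [x0] * (n + 1)
    for k in range(1, n + 1):
        t = s[k]
        # re-evaluate the WHOLE integral at each step: the integrand
        # depends on the outer time t, so past contributions change
        integral = sum(f(t, s[j], x[j], u(s[j])) * h for j in range(k))
        x[k] = x0 + integral
    return s, x

# With u == 1 the exact solution is x(t) = x0 + 1 - exp(-(t - t0)).
ts, xs = solve_volterra(0.0, 0.0, 1.0, lambda s: 1.0)
exact = 1.0 - math.exp(-1.0)
print(abs(xs[-1] - exact))  # small discretization error
```

Note that, unlike an ODE solver, the scheme cannot simply accumulate increments, precisely because the integrand is re-evaluated with the outer time t at every step.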
The problem is to minimize the endpoint functional

J(w) = ϕ_0(t_0, x(t_0), t_1, x(t_1))   (2)

on the set of all processes (solutions of system (1)) satisfying the endpoint constraints

ϕ_i(t_0, x(t_0), t_1, x(t_1)) ≤ 0,   (3)
η_j(t_0, x(t_0), t_1, x(t_1)) = 0,   (4)

the mixed state-control constraints

F_i(t, x(t), u(t)) ≤ 0,   (5)
G_j(t, x(t), u(t)) = 0,   (6)

and the state constraints

Φ_k(t, x(t)) ≤ 0.   (7)

The functions ϕ_0, ϕ_i, η_j are assumed to be defined and continuously differentiable on an open set P ⊂ R^{2n+2}, and the functions F_i, G_j, and Φ_k are assumed to be defined and continuously differentiable on an open set Q ⊂ R^{1+n+r} (the smoothness assumptions). The notations d(F), d(G), and d(Φ) stand for the numbers of these functions.

Moreover, we assume that the mixed constraints (5) and (6) are regular in the following sense: at any point (t, x, u) ∈ Q satisfying the relations F_i ≤ 0 for all i and G_j = 0 for all j, the system of vectors

F_{iu}(t, x, u), i ∈ I(t, x, u),   G_{ju}(t, x, u), j = 1, ..., d(G),

is positively-linearly independent, where I(t, x, u) = {i | F_i(t, x, u) = 0} is the set of active indices of the mixed inequality constraints at the given point. Here and in the sequel we denote by F_{iu} the partial derivative (gradient) of the function F_i with respect to the variable u. Similar notation is used for other functions and variables.

Recall that a system consisting of two tuples of vectors p_1, ..., p_m and q_1, ..., q_k in the space R^r is said to be positively-linearly independent if there does not exist a nontrivial tuple of multipliers α_1, ..., α_m, β_1, ..., β_k with all α_i ≥ 0 such that

α_1 p_1 + ... + α_m p_m + β_1 q_1 + ... + β_k q_k = 0.

The problem (1)-(7) will be called Problem A, and the relations (2)-(4) its endpoint block.
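To illustrate the notion, consider the following toy example in R^2 (ours, not taken from the paper):

```latex
% Toy example in R^2 (ours, for illustration).
% The pair p_1 = (1,0), q_1 = (0,1) is positively-linearly independent:
\alpha_1 (1,0) + \beta_1 (0,1) = (\alpha_1, \beta_1) = (0,0)
\;\Longrightarrow\; \alpha_1 = \beta_1 = 0 .
% By contrast, the tuple p_1 = (1,0), p_2 = (-1,0) (with no q's) is
% positively-linearly dependent, since the nontrivial nonnegative
% choice \alpha_1 = \alpha_2 = 1 gives
1 \cdot (1,0) + 1 \cdot (-1,0) = (0,0) .
```

Note that linear independence of the whole tuple implies positive-linear independence, but not conversely: in R^1 the pair p_1 = p_2 = 1 is linearly dependent, yet α_1 + α_2 = 0 with α_1, α_2 ≥ 0 forces α_1 = α_2 = 0.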
Note that the function f explicitly depends on two time variables, t and s, whose roles are essentially different. Conventionally, the variable s will be called the inner, and t the outer time variable, and one should carefully distinguish between them in further considerations. Among the four arguments of the function f and its derivatives, the first argument will always be the outer and the second the inner time variable, no matter by which letters they are denoted.
As in [6] and [7], we mention an important particular case of system (1): if f does not depend on the outer time variable t, i.e., f = f(s, x(s), u(s)), then the integral equation (1) is equivalent to the differential equation ẋ(t) = f(t, x(t), u(t)), and hence Problem A becomes an optimal control problem with ordinary differential equations on a nonfixed time interval.
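This equivalence follows by differentiating the integral identity with respect to t (a standard computation, sketched here):

```latex
% When f does not depend on the outer time, the equation of type (1) reads
x(t) = x(t_0) + \int_{t_0}^{t} f\bigl(s, x(s), u(s)\bigr)\,ds .
% Differentiating in t: the integrand does not involve t, so only the
% upper limit of integration contributes, giving
\dot{x}(t) = f\bigl(t, x(t), u(t)\bigr) \quad \text{for a.e. } t \in [t_0, t_1],
% with the initial condition x(t_0) = x_0 recovered by setting t = t_0.
```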
Obviously, each process under consideration must "lie" in the domain R of the function f(t, s, x, u).

Definition. A process w(t) = (x(t), u(t)) defined on an interval t ∈ [t_0, t_1] (with continuous x(t) and measurable, essentially bounded u(t)) will be called admissible with respect to R if its "extended graph"

{(t, s, x(s), u(s)) | t_0 ≤ s ≤ t ≤ t_1}

is contained in R. A process is called admissible in Problem A if it is admissible with respect to R and satisfies all the constraints (1) and (3)-(7) of the problem.
Like in any problem on a nonfixed time interval, the notion of weak minimum in Problem A needs a modification.
Definition. We will say that an admissible process w^0(t) = (x^0(t), u^0(t)), t ∈ [t̂_0, t̂_1], provides the extended weak minimum if there exists an ε > 0 such that for any Lipschitz continuous bijective mapping ρ : [t_0, t_1] → [t̂_0, t̂_1] satisfying the conditions |ρ(t) − t| < ε and |ρ̇(t) − 1| < ε, and for any admissible process w(t) = (x(t), u(t)), t ∈ [t_0, t_1], satisfying the conditions

|x(t) − x^0(ρ(t))| < ε ∀ t ∈ [t_0, t_1],   |u(t) − u^0(ρ(t))| < ε (∀) t ∈ [t_0, t_1],   (8)

we have J(w) ≥ J(w^0). (The notation (∀), as usual, means "for almost all".)

The conditions on ρ imply ρ(t_0) = t̂_0 and ρ(t_1) = t̂_1. If the time interval is fixed and we take ρ(t) = t, then relations (8) describe the usual uniform closeness between the processes w^0 and w both in the state and control variables. However, for an arbitrary ρ(t), relations (8) extend the set of "competing" processes, and thus, even for a fixed time interval, the extended weak minimum is stronger than the usual weak minimum.
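A toy numerical illustration of an admissible time change ρ (ours, with assumed interval endpoints): an affine map between two nearby intervals satisfies both closeness conditions.

```python
# Toy illustration (ours, not from the paper): a Lipschitz bijective
# time change rho: [t0, t1] -> [that0, that1] with |rho(t) - t| < eps
# and |rho'(t) - 1| < eps. All numbers below are assumptions.
t0, t1 = 0.0, 1.05          # interval of the competing process
that0, that1 = 0.0, 1.0     # interval of the reference process
eps = 0.1

def rho(t):
    # affine map of [t0, t1] onto [that0, that1]
    return that0 + (t - t0) * (that1 - that0) / (t1 - t0)

drho = (that1 - that0) / (t1 - t0)   # constant derivative of the affine map

n = 1000
grid = [t0 + k * (t1 - t0) / n for k in range(n + 1)]
max_shift = max(abs(rho(t) - t) for t in grid)
print(max_shift < eps, abs(drho - 1.0) < eps)  # prints: True True
```

Here max|ρ(t) − t| = 0.05 and |ρ̇ − 1| ≈ 0.048, both below ε = 0.1, so this ρ is an admissible reparametrization in the sense of the definition.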
Note that here ψ x and ψ t are not the partial derivatives with respect to x and t, but simply the adjoint variables, which refer to x and t, respectively.This notation was proposed by Dubovitskii and Milyutin and turned out to be highly convenient, especially in problems with many state variables.We hope it will not cause confusion.
We denote by dψ_x, dψ_t, dµ_k the Lebesgue–Stieltjes measures corresponding to the functions of bounded variation ψ_x, ψ_t, µ_k, respectively. These measures have no atoms at the points t̂_0 and t̂_1, and moreover, dµ_k ≥ 0, k = 1, ..., d(Φ), since each of them corresponds to an inequality constraint. Hence each µ_k is a monotone nondecreasing function. By ψ̇_x, ψ̇_t, µ̇_k we denote the generalized derivatives of these functions with respect to t. In what follows, all pointwise relations involving continuous functions hold for all t, while those involving measurable functions hold for almost all t.
In order to present the optimality conditions in Problem A, we introduce, for a tuple λ of multipliers as in (11), the modified Pontryagin function H(t, s, x, u). Here ψ_x(t−) means the left-hand value of the function ψ_x at a point t, and f_t means the partial derivative of the function f(t, s, x, u) with respect to the first, outer variable t.
Also, for w^0 and λ, let us introduce the augmented modified Pontryagin function H̄(t, s, x, u), the endpoint Lagrange function

l(t_0, x_0, t_1, x_1) = α_0 ϕ_0(t_0, x_0, t_1, x_1) + Σ_i α_i ϕ_i(t_0, x_0, t_1, x_1) + Σ_j β_j η_j(t_0, x_0, t_1, x_1),

and a special auxiliary function R(t). The main result of the paper is the following.

Theorem 1 (local maximum principle). If a process w^0(t) = (x^0(t), u^0(t)), t ∈ [t̂_0, t̂_1], provides the extended weak minimum in Problem A and satisfies assumption (10), then there exists a tuple of multipliers (11) satisfying the properties specified above and such that the following conditions hold true: a) nonnegativity conditions; c) endpoint complementary slackness conditions

α_i ϕ_i(t̂_0, x^0(t̂_0), t̂_1, x^0(t̂_1)) = 0, i = 1, ..., ν;

d) pointwise complementary slackness conditions; g) transversality conditions in x; i) stationarity condition with respect to the control. Here f_s is the partial derivative of the function f(t, s, x, u) with respect to the second, inner variable s, and f_{ts} is its second mixed partial derivative.

The last condition is so named since, together with (14), it gives the equation for the evolution of the function H(t, t, x^0(t), u^0(t)), which is often (especially in mechanical problems) regarded as the total energy of the system. If the state and mixed constraints are absent and the dynamics does not explicitly depend on time, f = f(x, u), then H̄ = H, R = 0, and we obtain "the energy conservation law": H = const along the optimal process.

Note that both the adjoint equation in t and the right transversality condition in t involve additional terms, dψ_x(t)R(t) and ψ_x(t̂_1)R(t̂_1), respectively. Both these terms are generated by the dependence of f(t, s, x, u) on the outer time variable t, which was absent in problems with ODEs. (Indeed, in those problems f_t = 0, whence R = 0, so these terms disappear.) In our opinion, this novelty in the optimality conditions for problems on a variable time interval needs further study.
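For the ODE case just mentioned, the conservation law can be checked directly (a standard sketch, assuming the optimal control is smooth):

```latex
% With f = f(x,u) and H = \psi_x f(x,u), along the optimal process
\frac{d}{dt}\,H
  = \dot\psi_x f + \psi_x f_x \dot x + \psi_x f_u \dot u .
% The adjoint equation \dot\psi_x = -\psi_x f_x and the state equation
% \dot x = f cancel the first two terms, while the stationarity
% condition in the control, \psi_x f_u = 0, kills the third:
\frac{d}{dt}\,H = -\psi_x f_x\, f + \psi_x f_x\, f + 0 = 0 ,
% so H is constant along the optimal process.
```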
Using generalized derivatives of functions of bounded variation, we can represent the adjoint equations in x and t in an easy-to-remember form.

About the Proof of Theorem 1

Like in our paper [7], in order to prove Theorem 1, we reduce Problem A to an auxiliary problem on a fixed time interval by using the change of time variable t = t(τ), where dt/dτ = v(τ) and v(τ) > 0. Setting x̃(τ) = x(t(τ)) and ũ(τ) = u(t(τ)), we come to a system of integral equations in which τ is a new time, t(τ) an additional state variable, v(τ) an additional control variable, and σ a new variable of integration instead of s.
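The substitution behind this reduction can be written out explicitly (our sketch, in the notation above): applying s = t(σ), ds = v(σ) dσ to the state equation gives

```latex
x(t(\tau)) = x(t_0)
  + \int_{t_0}^{t(\tau)} f\bigl(t(\tau), s, x(s), u(s)\bigr)\,ds
% and, substituting s = t(\sigma), ds = v(\sigma)\,d\sigma, with
% \tilde x(\tau) = x(t(\tau)), \tilde u(\tau) = u(t(\tau)):
\;=\;
\tilde x(\tau_0)
  + \int_{\tau_0}^{\tau} f\bigl(t(\tau), t(\sigma),
        \tilde x(\sigma), \tilde u(\sigma)\bigr)\, v(\sigma)\,d\sigma ,
\qquad
t(\tau) = t(\tau_0) + \int_{\tau_0}^{\tau} v(\sigma)\,d\sigma .
```

Note that the first argument of f stays frozen at t(τ), which is exactly why the integrand of the reduced system involves the current value of the state variable t(τ).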
We see that here the integrand of the first equation involves the value t(τ) of a state variable, which was not allowed in (1). Abstracting from the specific form of the second equation and changing the notation t(τ) to a more general y(τ) (and also changing τ to a more convenient t), we come to a system of the same kind on a fixed interval [t_0, t_1], in which the integrand of the first equation also depends on the current value y(t) of an additional state variable y (which can be regarded as the outer state variable). This system does not fall into the framework of equation (1). Thus, we have to study a new class of integral control systems, broader than (1).
Adding to the obtained system the mixed constraints, the state constraints, and the terminal block, we obtain the following Problem B.
Like before, the functions η_j, ϕ_i, and ϕ_0 are assumed to be continuously differentiable on an open set P ⊂ R^{2n+2m}, and the functions F_i, G_j, and Φ_k continuously differentiable on an open set Q ⊂ R^{1+m+n+r}. We also assume that the mixed constraints (22)-(23) are regular in the same sense as in Problem A.
To derive optimality conditions in Problem B, we consider it as a particular case of an abstract nonsmooth problem in a Banach space, so that we can apply the well known abstract Lagrange multipliers rule for nonsmooth problems (see, e.g., [5, 6]). Let us formulate it. The problem is

f_0(x) → min,   b_i(x) ∈ K_i, i = 1, ..., ν,   g(x) = 0,   x ∈ D,   (28)

where the K_i are closed convex cones and g maps into a Banach space Y. We study the local minimality of an admissible point x_0 ∈ D. Assume that the cost f_0 and the mappings b_i are Frechet differentiable at x_0, the operator g is strictly differentiable at x_0, and the image of g'(x_0) is closed. Let K^0_i be the polar cone to K_i, i = 1, ..., ν.
Applying Theorem 2 to Problem B, we perform some analysis of the obtained conditions and represent them in the form of a local maximum principle for Problem B. The latter is then applied to the auxiliary problem with system (18)-(19), and finally we rewrite the results in terms of the original Problem A.

Theorem 2. Let x_0 provide a local minimum in problem (28). Then there exist Lagrange multipliers α_0 ≥ 0, ζ*_i ∈ K^0_i, i = 1, ..., ν, and y* ∈ Y*, not all equal to zero, satisfying the complementary slackness conditions ⟨ζ*_i, b_i(x_0)⟩ = 0, i = 1, ..., ν, and such that the Lagrange function

L(x) = α_0 f_0(x) + Σ_{i=1}^{ν} ⟨ζ*_i, b_i(x)⟩ + ⟨y*, g(x)⟩

is stationary at x_0: L'(x_0) = 0.

In the system obtained above, x(t) and y(t) are continuous functions of dimensions n and m, respectively, and u(t) is a measurable and essentially bounded function on [t_0, t_1]; we still denote the time by t. The data functions g and h, as before, are assumed to be twice continuously differentiable on an open set R ⊂ R^{2+2m+n+r}.
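A toy finite-dimensional illustration of the stationarity conclusion of Theorem 2 (our example: smooth data, no cone constraints, α_0 = 1):

```python
# Toy illustration (ours, not from the paper) of L'(x0) = 0:
# minimize f0(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0.
# The minimizer is x0 = (1/2, 1/2) with multiplier y_star = -1, and
# the Lagrange function is L(x) = f0(x) + y_star * g(x).

def grad_L(x1, x2, y_star):
    # gradient of L(x) = x1^2 + x2^2 + y_star*(x1 + x2 - 1)
    return (2 * x1 + y_star, 2 * x2 + y_star)

x0 = (0.5, 0.5)
y_star = -1.0
g1, g2 = grad_L(*x0, y_star)
print(g1, g2)  # prints: 0.0 0.0
```

The gradient of the Lagrange function vanishes at x_0, and the constraint g(x_0) = 0 holds, exactly as the multiplier rule predicts.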