MULTIOBJECTIVE APPROACH IN THE TREATMENT OF CANCER

Abstract. In this work we deal with a cancer problem involving the growth of tumor cells and their interaction with effector cells. The goal is to find an optimal control minimizing the density of tumor cells together with the amount of chemotherapy drugs while maximizing the density of effector cells. By invoking multi-objective optimization, we characterize Pareto optimal solutions and present simulations of the Pareto front.


Introduction
Mathematical modeling of biological phenomena has a long history, and many studies have been devoted to it. Cancer and its evolution, in particular, have attracted great interest from mathematical researchers [3,7,10,15,17,19,20], with the aim of understanding the dynamics of tumor cells and their interaction with other cell types, and of analyzing mathematical models to obtain results that can help medical staff make decisions. For example, Depillis et al. [5] analyzed the dynamics of tumor-immune interactions under chemotherapy and applied control theory to the coupled system of ordinary differential equations in order to characterize the optimal controls related to drug therapy. In the same context, Ledzewicz et al. [14] analyzed a mathematical model for anti-angiogenic treatment as an optimal control problem, asking how to schedule a given amount of angiogenesis inhibitors so as to reduce the tumor volume as much as possible.
In a previous work [18], we gave, by means of the Pontryagin maximum principle, a characterization of a quadratic optimal control problem subject to a dynamical system [12] describing the interaction between cancer cells and effector cells.
The present work differs from [18] in two respects. First, we investigate the optimal control cancer problem, in which the control models the amount of chemotherapy drug, with an objective function that is linear in the control, which is more realistic and presents more challenges [5,13,14]. Second, the objective function combines three criteria: the first maximizes the density of cancer cells killed by the treatment, the second maintains acceptable toxicity to normal tissue by minimizing the total amount of chemotherapy drugs, and the third maximizes the density of effector cells. We thus obtain a multi-objective optimization problem with three simultaneous criteria. Since a classical solution of a multi-criteria problem rarely exists, the interest of the multi-objective approach is to find solutions realizing a compromise between the objective functions, namely Pareto solutions. Our goal, besides characterizing Pareto solutions, is to determine which of them is the most interesting one for our problem.
Consequently, the main emphasis is on a characterization of bang-bang and singular controls and of the corresponding trajectories. Bang-bang controls are natural candidates for optimality and are widely used in medical treatment, where a maximum dose of chemotherapy is given repeatedly with breaks in between. In this analysis, we seek a compromise between the density of tumor cells, the dose of treatment and the density of effector cells that minimizes the objective function under study. The results of this strategy can allow us to determine treatment protocols for a cancer patient.
The present work is divided into four sections. The next section is dedicated to the formulation of the objective functions and the use of the scalarization method. In Section 3, we characterize the optimal control using the Pontryagin maximum principle as well as the dynamics of the cell populations. Finally, in Section 4 we illustrate the theoretical results by numerical simulations, with specific parameters related to lymphoma cancer.

Cell's dynamical model
In this work we build on the mathematical model proposed by Kuznetsov [12]. This model differs from most others because it takes into account the infiltration of the tumor by effector cells as well as the possibility of effector cell inactivation. Even though the model is simple, it describes well the dynamics of the cells and the interaction between the tumor cells and the cells of the immune system. Our contribution in this work consists first of all in adding the treatment to this model. We choose chemotherapy, which is effective against many types of cancer. The goal of chemotherapy is to attack the growth factors of cancer cells and stop their proliferation. Chemotherapy also attacks healthy proliferating cells, including the progenitor cells which produce effector cells, so the immune system is affected as well. We then obtain the following controlled dynamical system, where the control concerns the chemotherapy treatment:

(2.1)    dT/dt = aT(1 − bT) − nET − µuT,
         dE/dt = s + pET/(g + T) − mET − dE − huE.

The control variable u denotes the concentration of chemotherapy. The parameters µ and h correspond respectively to the death rates of tumor cells and effector cells due to chemotherapy. We list all of the parameters used in Kuznetsov's model, their meaning and their units in Table 1.
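As a sanity check, the controlled dynamics (2.1) can be integrated numerically. The sketch below uses SciPy with Kuznetsov-type parameter values that are placeholders for the entries of Table 1, and a constant dose as a stand-in for the control u; it is an illustration, not the paper's simulation code.

```python
from scipy.integrate import solve_ivp

# Placeholder parameters in the spirit of Kuznetsov's model (see Table 1).
a, b, n = 0.18, 2.0e-9, 1.101e-7     # tumor growth, competition, kill by effectors
s, p, g = 1.3e4, 0.1245, 2.019e7     # effector influx and stimulation
m, d = 3.422e-10, 0.0412             # effector inactivation and natural death
mu, h = 0.9, 0.6                     # chemotherapy kill rates on T and E (assumed)

def rhs(t, x, u):
    """Right-hand side of the controlled system (2.1); u(t) is the chemo dose."""
    T, E = x
    dT = a * T * (1.0 - b * T) - n * E * T - mu * u(t) * T
    dE = s + p * E * T / (g + T) - m * E * T - d * E - h * u(t) * E
    return [dT, dE]

# Constant half-maximal dose over 30 days from an initial state (T0, E0).
sol = solve_ivp(rhs, (0.0, 30.0), [1.0e7, 1.0e5],
                args=(lambda t: 0.5,), rtol=1e-8, atol=1e-6)
```

With these values the tumor population decays under treatment while the effector population settles near its influx/clearance balance, which matches the qualitative behavior discussed in Section 4.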
Let us rewrite this controlled dynamical system as

(2.2)    ẋ(t) = f(x(t), u(t)),    x(0) = x_0,

where x = (T, E) ∈ W^1([0, t_f], R^2), the space of absolutely continuous functions, u ∈ U := {u : [0, t_f] → U measurable}, U = [0, u_max], t_f is the final time of the chemotherapy treatment and x_0 = (T_0, E_0). Moreover, f is defined from (2.1) by

f(x, u) = ( aT(1 − bT) − nET − µuT,  s + pET/(g + T) − mET − dE − huE ).

One can ask whether the dynamical system (2.2) admits a unique solution x(t) associated with a control u ∈ U. The answer is affirmative since the function f satisfies the standard hypotheses for the existence of a unique solution. Indeed, as shown in [18], the following assumptions are fulfilled by f: there exist constants C_1 > 0 and C_2 > 0 such that, for all (x, x′, u),

(2.3)    |f(x, u)| ≤ C_1 (1 + |x|),
(2.4)    |f(x, u) − f(x′, u)| ≤ C_2 |x − x′|.

Multiobjective optimal control
We now define three objective functions J_1, J_2 and J_3, which correspond respectively to the density of tumor cells, the amount of treatment doses (the chemotherapy u) and the density of effector cells. These functions are linear with respect to the control variable and are associated with the system (2.2) on the interval [0, t_f]:

J_1(u) = ∫_0^{t_f} T(t) dt,    J_2(u) = ∫_0^{t_f} u(t) dt,

and the third objective function is defined by

J_3(u) = − ∫_0^{t_f} E(t) dt.

Hence, we deal with the multiobjective optimal control problem

(2.5)    inf_{u ∈ U} J(u) := (J_1(u), J_2(u), J_3(u)),

subject to the dynamics (2.2). Contrary to single-objective optimization, in multiobjective optimization a single minimizer of all the objective functions generally fails to exist. The natural way to avoid this drawback is to search for the so-called "Pareto optima" and the "Pareto front".
A Pareto optimum is a state for which one cannot improve one objective without making another objective worse. In most cases there exist several Pareto optima. Rigorously speaking, we set:

Definition 2.1. A control u* ∈ U is a Pareto optimum of the problem (2.2)-(2.5) if there is no u ∈ U such that J_i(u) ≤ J_i(u*) for all i ∈ {1, 2, 3} with at least one strict inequality. The set of all Pareto optima of the problem (2.2)-(2.5) is denoted P(0, x_0).

We define the Pareto front as follows.

Definition 2.2. The set of all objective functional values at the Pareto optima is called the Pareto front, that is,

F := { J(u) : u ∈ P(0, x_0) }.

The main object of works dealing with multiobjective optimization is to determine the Pareto front, or some points of it, see [1,6,11].
Let us note that, according to ([6], Thm. 1, p. 82), if the function J is convex then the Pareto front is convex, and hence the scalarization method can be used to determine the Pareto front, see also [8,16].
Lemma 2.3. The vector-valued objective function J is convex.

Proof. The proof is immediate since J_1, J_2 and J_3 are linear.
So, the Pareto front of the three-objective control problem (2.2)-(2.5) can be determined by minimizing the scalarized objective function

(2.6)    J_w(u) := w_1 ε_1 J_1(u) + w_2 J_2(u) + w_3 ε_2 J_3(u).

Since the density of tumor cells, the density of effector cells and the dose of chemotherapy treatment do not have the same order of magnitude, we introduce the scaling factors ε_1 and ε_2. This allows us to display more clearly the dynamics of the system (2.1) under the single objective function (2.6).
The weight parameters w_i can be chosen to model preferences between the objective costs. Indeed, if the reduction of the tumor cell concentration is privileged in the treatment protocol, then w_1 must be greater than w_2 and w_3; conversely, if minimizing the amount of drugs is preferred in order to preserve the patient's health, then w_2 and w_3 must be greater than w_1.
The problem consists in minimizing the objective function J_w, which amounts to minimizing the tumor cells and the total amount of drug given over the time interval [0, t_f]. These considerations lead to the following optimal control problem:

(2.7)    inf_{u ∈ U} J_w(u)  subject to (2.2).

For each choice of the weights w_i, the solution of this problem is a Pareto optimum of the three-objective problem (2.2)-(2.5), and by varying the w_i we can determine the Pareto front, see for instance [6,8,11,16].
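The weight sweep behind (2.7) can be sketched on a discrete toy example. The candidate objective vectors below are hypothetical; in the paper, each vector would come from solving (2.7) for one control. The point is that each positive-weight minimizer of J_w is a Pareto optimum, and varying the weights visits different points of the front.

```python
import numpy as np

# Hypothetical achievable objective vectors (J1, J2, J3); the last one
# is dominated by the second and should never be selected.
candidates = np.array([[5.0, 1.0, 4.0],
                       [3.0, 2.0, 3.0],
                       [1.0, 4.0, 2.0],
                       [4.0, 4.0, 4.0]])

def scalarized_minimizer(w, eps1=1.0, eps2=1.0):
    """Minimize J_w = w1*eps1*J1 + w2*J2 + w3*eps2*J3 over the candidates."""
    w1, w2, w3 = w
    costs = (w1 * eps1 * candidates[:, 0]
             + w2 * candidates[:, 1]
             + w3 * eps2 * candidates[:, 2])
    return candidates[np.argmin(costs)]

# Sweep w1 with w3 fixed and w1 + w2 + w3 = 1: different weights
# select different Pareto optima, tracing (part of) the front.
front_points = {tuple(scalarized_minimizer((w1, 0.9 - w1, 0.1)))
                for w1 in np.linspace(0.1, 0.8, 8)}
```

Because the objectives here are convex (Lemma 2.3), this weighted-sum sweep can reach the whole front; for nonconvex problems it misses points in concave regions.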
The problem (2.7) is affine in the control variable u. Quadratic control costs generally lead to continuous control functions that are difficult to administer in practice. For this reason, we are interested in costs that are linear in the control. The linear case gives concatenations of bang-bang and singular controls. Piecewise constant bang-bang controls are easy to administer, and in many cases even singular controls can be closely approximated by piecewise constant controls. This control structure will be confirmed by our computations in the next section.

Characterization of optimal control
To prove the existence of an optimal control, we use arguments based on a result of [9].

Theorem 3.1. The optimal control problem (2.7) admits a solution.

Proof. According to (2.3), (2.4) and [18], for each control u ∈ U there exists a unique solution (T, E) of the system (2.1) defined on [0, t_f]. Now, to prove that the problem (2.7) admits a solution, it suffices to check, according to [9], that the following conditions hold:

A1. The dynamic field f is continuous, Lipschitz in its first argument x, and uniformly bounded with respect to the second argument u. The continuity of f is obvious, the Lipschitz property is given by (2.4), and the boundedness derives from (2.3) by taking the supremum over u in U.

A2. The running cost L is continuous, Lipschitz in its first argument x, and uniformly bounded with respect to the second argument u. These properties follow from the linearity of L with respect to all its arguments.

A3. The augmented field f̃(x, u) := [L(x, u); f(x, u)] is such that f̃(x, U) is convex for all x. This derives from the linearity of f and L with respect to the argument u.

Hence we can conclude that the problem (2.7) admits an optimal control, as required.
We now develop the characterization of this optimal control by the Pontryagin maximum principle. Consider again the control system (2.2). The Hamiltonian of this system is given by

H(x, u, λ) = w_1 ε_1 T + w_2 u − w_3 ε_2 E + ⟨λ, f(x, u)⟩,

where ⟨·, ·⟩ is the scalar product of R^2 and λ = (λ_1, λ_2) is the adjoint vector. So, the optimal control is given by

u*(t) = u_max if Φ(t) < 0,    u*(t) = 0 if Φ(t) > 0,

where the switching function is defined by

Φ(t) = ∂H/∂u = w_2 − λ_1(t) µ T(t) − λ_2(t) h E(t).

Proof. The Pontryagin maximum principle asserts that the adjoint states satisfy λ̇_1 = −∂H/∂T and λ̇_2 = −∂H/∂E, with the transversality conditions λ_1(t_f) = λ_2(t_f) = 0, and that the optimal control minimizes the Hamiltonian pointwise. Taking into account the linearity of H with respect to u, this minimization yields the bang-bang law above governed by the switching function Φ. In this switching function there is no explicit dependence on the control u*, so singular arcs are possible. From Φ = 0 and Φ̇ = 0 we can obtain λ_1 and λ_2 in terms of the state variables.
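The pointwise minimization of a control-affine Hamiltonian reduces to a sign test on the switching function; a minimal sketch, with the singular case flagged rather than resolved:

```python
def bang_bang_control(phi, u_max, tol=1e-9):
    """Pointwise minimizer of a Hamiltonian affine in u, where
    phi = dH/du is the switching function: take u = u_max where
    phi < 0 and u = 0 where phi > 0. A value of phi within `tol`
    of zero flags a candidate singular arc, returned as None,
    since there the control must be recovered from Phi-dot-dot = 0."""
    if phi < -tol:
        return u_max
    if phi > tol:
        return 0.0
    return None  # candidate singular arc

# Example: a negative switching function calls for the maximal dose.
u = bang_bang_control(-0.5, u_max=1.0)   # gives 1.0
```

The tolerance `tol` is a numerical device: in floating point, Φ is essentially never exactly zero, so singular arcs are detected as intervals where Φ stays small.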
Firstly, from Φ = 0 we obtain λ_2 in terms of λ_1 and the states; replacing λ_2 by its expression in Φ̇ = 0 then gives λ_1 in terms of the states T and E. Differentiating Φ a second time yields an expression of the form

Φ̈ = A(T, E) + B(T, E) u.

Moreover, a further necessary optimality condition is the generalized Legendre-Clebsch condition

(−1)^q (∂/∂u) [ d^{2q}Φ/dt^{2q} ] ≤ 0,

where q is the order of the singularity of the control u. Thus, for our problem, the strict Legendre-Clebsch condition is satisfied if

(3.8)    B(T, E) < 0.

Under this condition, the equation Φ̈(t) = 0 gives a formula for the singular control in terms of the state variables T and E. We then obtain the feedback control

u_sing(T, E) = − A(T, E) / B(T, E).

Numerical simulation
After the characterization of the optimal control u, we carry out numerical simulations to illustrate the theoretical results obtained in the previous section, using Matlab. We are interested in the indirect method, which consists in numerically solving, by a shooting method, the boundary value problem obtained by applying the maximum principle. The application of the Pontryagin maximum principle allowed us to express the extremal control as a function of the states (T, E) and the adjoint states (λ_1, λ_2). The extremal system then becomes a differential system of the form

(4.1)    Ż(t) = F(Z(t)),

where Z(t) = (T(t), E(t), λ_1(t), λ_2(t)), with initial conditions on the states (T_0, E_0) and final conditions on the adjoint states (λ_1(t_f), λ_2(t_f)) = (0, 0). The shooting method consists in solving the coupled dynamics (4.1) once values of λ_1(0) and λ_2(0) are provided: one has to solve a two-point boundary value problem in which conditions are imposed at both ends of the time interval. It follows that, in order to initialize the shooting method for (4.1) successfully, a guess of the initial value of the adjoint vector is necessary.
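The single-shooting idea can be sketched as follows: integrate (4.1) forward from a guessed λ(0) and adjust the guess until the transversality condition λ(t_f) = 0 is met. The extremal field below is a simple linear stand-in, not the cancer model, so that the sketch is self-contained and runnable.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def extremal_rhs(t, Z):
    """Stand-in for the coupled state/adjoint field F in (4.1);
    Z = (T, E, lambda1, lambda2). A toy linear system for illustration."""
    T, E, lam1, lam2 = Z
    return [-0.5 * T, 0.2 * E, 0.5 * lam1 - 1.0, -0.2 * lam2 + 1.0]

def shoot(lam0, x0=(1.0, 1.0), tf=1.0):
    """Integrate forward from guessed initial adjoints lam0 and return
    the mismatch with the final condition lambda(tf) = (0, 0)."""
    Z0 = [x0[0], x0[1], lam0[0], lam0[1]]
    sol = solve_ivp(extremal_rhs, (0.0, tf), Z0, rtol=1e-9, atol=1e-12)
    return sol.y[2:, -1]          # lambda_1(tf), lambda_2(tf)

# Root-find on the initial adjoints so that lambda(tf) vanishes.
lam0_star = fsolve(shoot, x0=[0.0, 0.0])
```

In practice the root-finder's success depends strongly on the initial guess for λ(0), which is the well-known sensitivity of indirect shooting methods.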
On the other hand, the control u has order of magnitude 10^0, while the density of the tumor cells has order of magnitude 10^7. Therefore, for the numerical simulations, we non-dimensionalize the system (2.1) by choosing order-of-magnitude concentration scales E_0 and T_0 for the cell populations E and T respectively, which turns (2.1) into the non-dimensionalized system (4.2). We used E_0 = 10^5, T_0 = 10^7 and the parameter values of Table 1 to define the new parameter values of (4.2). We then compute numerically the optimal control minimizing the scaled objective functional (2.6). In what follows we simulate the theoretical results for different values of w_1 and w_2, in two types of scenarios: in the first, minimizing the tumor cell population is privileged, which means that the value of w_1 is higher than w_2; in the second, we give more importance to minimizing the chemotherapy dose. The scaling weights are specified as ε_1 = 10^7/T_0 = 1 and ε_2 = 10^2 E_0/T_0 = 1.
Figure 1 illustrates the variation of the cell populations and the drug concentration level for w_1 = 0.6 and w_2 = 0.2. In these numerical simulations, the number of days of treatment was limited to 30. In this case, a maximum dose of chemotherapy is given at the beginning of treatment to reduce the density of tumor cells to a much lower level. After almost two days of treatment, the medicine is completely shut off and is never turned on again during the month, while the immune cell population has a chance to recover. Figure 2 shows the results of the second case, in which we privilege minimizing the chemotherapy dose by choosing w_1 = 0.2 and w_2 = 0.6. The chemotherapy is introduced at the maximum level for a short period. The tumor cells therefore decrease more slowly, reaching the same minimum value as in the previous case at the end of treatment, when no chemotherapy is given, while the effector cells keep increasing over time.
In both cases studied, a maximum dose of chemotherapy at the beginning of treatment is sufficient to control the tumor. Since the chemotherapy does not remain in the patient's body and degrades over time, the effector cells have a chance to resume their growth and reach a maximum density at the end of treatment; this strong immune system keeps the tumor regrowth rate low. The optimal control in these runs is bang-bang: we do not encounter a singular control when running the system dynamics for 30 days. We note that the parameters of the problem are such that the singular control does not satisfy the necessary second-order condition (3.8). More generally, the curve represented in Figure 3 illustrates solutions of the problem (2.7) for different values of the weights w_1 and w_2. This curve corresponds to the Pareto front, which provides a useful tool to find possible compromises between the different cost functions J_1, J_2 and J_3. The intuitive meaning of this curve is that elements which are not on the Pareto front are never the best choice. In contrast, solutions on the Pareto front are Pareto solutions and could be the best choice, depending on how the user decides to trade off the various objectives.
From the results obtained in the various cases studied, we can affirm that, in each case, we obtain Pareto optimal solutions that can help the medical staff make the best choice for the treatment of a cancer patient. The cases in which we privilege the density of tumor cells give better results concerning the density of the cell populations and the administered doses of chemotherapy, and can thus help doctors choose a therapeutic protocol that minimizes tumor cells while minimizing treatment doses.

Conclusion
Our study of a linear optimal control problem was based on a mathematical model incorporating interacting tumor and effector cell populations and their responses to chemotherapy treatment. We invoked multiobjective optimization in order to minimize the density of tumor cells as well as the amount of chemotherapy drugs while maximizing the density of immune cells. For the three objective functionals analyzed, we characterized the optimal Pareto solutions by using the scalarization method. An optimal control was then explicitly characterized and was successful in reducing the tumor density to near zero.
We simulated the theoretical results for different values of the weight parameters w_1 and w_2. In all the cases studied, the linear control leads to a similar behavior in the administration of the chemotherapy doses. The numerical simulations demonstrate that a burst of treatment at the beginning is the best way to fight the cancer: if a maximum dose of chemotherapy can be administered to quickly kill the tumor, the effector cells have a chance to resume their growth and boost the immune system, and the cancer can then be effectively controlled.

Figure 3. Representation of J 3 as a function of J 1 and J 2 using different values of w 1 and w 2 .

Table 1 .
Parameters of the dynamic system.
Singular controls appear if the switching function vanishes on a time interval. Assume that u*(t) is singular on an interval [t_1, t_2], which means that Φ(t) vanishes on [t_1, t_2]. Setting the first time derivative of the switching function Φ to zero then yields a first relation between the adjoint and state variables.