Abstract:
Motivated by the portfolio management problem, we propose a composite model for Markov processes. The state space of a composite Markov process consists of two parts, J and its complement, in the Euclidean space R^n. When the process is in the complement of J, it evolves like a continuous-time Lévy process; once the process enters J, it instantly makes a jump (of finite size) according to a transition function, like a discrete-time Markov chain. The composite Markov process provides a new model for the impulse stochastic control problem, with the instant jumps in J modeling the impulse control feature (e.g., selling or buying stocks in the portfolio management problem). With this model, we show that an optimal policy can be obtained by a direct comparison of the performance of any two policies.
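To make the two-part dynamics concrete, here is a minimal simulation sketch of a one-dimensional composite process. All specifics are assumptions for illustration only and do not come from the paper: the Lévy component is taken to be a Brownian motion with drift, the jump region J is the interval [1, ∞), and the jump transition function is a uniform reset onto [0, 0.5].

```python
import numpy as np

# Illustrative sketch only: Brownian motion with drift as the Levy part,
# J = [1, inf) as the jump region, and a uniform reset as the jump kernel.
# These choices are assumptions, not the model used in the paper.

rng = np.random.default_rng(0)

def in_J(x):
    """Jump region J: the process jumps instantly whenever it enters this set."""
    return x >= 1.0

def jump_kernel(x):
    """Instantaneous, finite-size jump drawn from an assumed transition function."""
    return rng.uniform(0.0, 0.5)

def simulate(x0=0.0, T=10.0, dt=1e-3, mu=0.2, sigma=0.3):
    """Euler discretization of the continuous evolution; instant jump on entering J."""
    n = int(T / dt)
    path = np.empty(n + 1)
    path[0] = x0
    x = x0
    for k in range(n):
        # Continuous evolution outside J (here, drifted Brownian motion).
        x = x + mu * dt + sigma * np.sqrt(dt) * rng.normal()
        # Once the process enters J, it jumps instantly according to the kernel,
        # like a discrete-time Markov chain transition.
        if in_J(x):
            x = jump_kernel(x)
        path[k + 1] = x
    return path

path = simulate()
print(path[:5], path[-1])
```

In a portfolio setting, the jump in J would correspond to an impulse control action such as a buy or sell that instantly moves the state, while the evolution outside J models the uncontrolled market dynamics.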