**Institution:** VHB ProDok

**Lecturer:** Prof. Dr. Stefan Helber (Leibniz Universität Hannover)

**Date:** March 27–30, 2017

**Place:** Hannover

**Registration:** Please click the link to open the registration form or send an email to doktorandenprogramm(at)vhbonline(dot)org.

**Abstract:**

The course covers the basic elements of (i) Markovian models of stochastic systems and (ii) Markovian decision processes, plus some basic elements of inventory theory. In this course, participants will learn how to construct and use these particular classes of probabilistic models of systems and decision processes. The defining feature of Markovian models is the memoryless property of the underlying stochastic processes: the future behavior of a system depends only on its current state, not on its previous history. The participants will learn why and how this often makes it possible to determine the probabilities of the different system states, and how these probabilities can then be used to determine performance measures of the system or to assign economic values to decisions made in an uncertain environment.

*Content*

We will start by describing the defining features and variants of stochastic processes, focusing on processes with a discrete state space. We will initially cover Markov processes in discrete time, as their modeling and analysis is straightforward, clear and illustrative. On this basis, we will then address Markov processes in continuous time, the so-called continuous-time Markov chains. We will describe Poisson processes and the famous memoryless property of the exponential distribution, and derive the necessary and sufficient balance equations for the steady-state analysis of continuous-time Markov chains. The participants will install and use Scilab, a free and open-source software for numerical computation (www.scilab.org), on their machines to perform numerical experiments and solve small exercises.
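To give a flavor of the kind of computation involved, the following is a minimal sketch (in Python rather than the Scilab used in the course) of solving the balance equations of a discrete-time Markov chain for its steady-state probabilities; the two-state transition matrix is purely illustrative.

```python
import numpy as np

# Illustrative two-state chain: P[i, j] = transition probability from state i to j.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# The steady-state distribution pi solves the balance equations pi = pi P
# together with the normalization sum(pi) = 1. We stack (P^T - I) with a
# row of ones and solve the (consistent) overdetermined system.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                      # steady-state probabilities
assert np.allclose(pi @ P, pi) # pi is invariant under one more transition
```

For this chain the long-run fractions of time in the two states come out as 5/6 and 1/6.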

Based on this foundation, we will then address the analysis of Markovian queueing systems and show how to determine the probability of having a given number of customers or jobs in the system or in the queue. After introducing and explaining Little's Law, we will use it to determine expected waiting or cycle times. Closing this introduction to queueing models, we will briefly explain and demonstrate Kingman's approximation for the expected waiting time in G/G/1 and G/G/N queueing models.
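These quantities can be computed in a few lines. The sketch below (with made-up arrival and service rates) evaluates the M/M/1 state probabilities, applies Little's Law, and compares Kingman's approximation against the exact M/M/1 waiting time; with exponential arrivals and service, the approximation is exact.

```python
# Illustrative M/M/1 queue: arrival rate lam, service rate mu (made-up numbers).
lam, mu = 4.0, 5.0
rho = lam / mu                       # utilization, must be < 1 for stability

# Steady-state probability of n customers in an M/M/1 system: P(n) = (1-rho) rho^n
p = [(1 - rho) * rho**n for n in range(5)]

L = rho / (1 - rho)                  # expected number of customers in the system
W = L / lam                          # Little's Law: W = L / lambda

# Kingman (VUT) approximation for the expected wait in queue of a G/G/1 system,
# with squared coefficients of variation ca2 (arrivals) and cs2 (service):
ca2, cs2 = 1.0, 1.0                  # both equal to 1 recovers the M/M/1 case
Wq_kingman = (rho / (1 - rho)) * ((ca2 + cs2) / 2) * (1 / mu)

Wq_exact = rho / (mu - lam)          # exact M/M/1 expected wait in queue
print(W, Wq_kingman, Wq_exact)
```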

In the next section, we will briefly cover basic elements of the probabilistic analysis of inventory systems. We will introduce the classical newsvendor problem and use it to explain and apply the first-order loss function for the case of normally distributed demand.
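As a preview, the newsvendor calculation for normal demand can be sketched as follows; all cost and demand parameters are invented for illustration, and only the Python standard library is used.

```python
from statistics import NormalDist

# Illustrative newsvendor instance (all numbers made up):
mu, sigma = 100.0, 20.0          # normally distributed demand
cu, co = 4.0, 1.0                # underage and overage cost per unit

# Optimal order quantity from the critical ratio cu / (cu + co):
crit = cu / (cu + co)
std = NormalDist()
z = std.inv_cdf(crit)
q_opt = mu + sigma * z

# Standard normal first-order loss function L(z) = phi(z) - z * (1 - Phi(z));
# expected lost sales at order quantity q are sigma * L((q - mu) / sigma).
loss = std.pdf(z) - z * (1 - std.cdf(z))
expected_lost_sales = sigma * loss
print(q_opt, expected_lost_sales)
```

With a critical ratio of 0.8, the optimal order quantity lands a little less than one standard deviation above mean demand.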

The next part of the course will be devoted to stochastic (dynamic) programming, also known as stochastic decision processes. Starting with a finite-horizon deterministic setting, we will treat the basic version of Bellman's recursion equation and explain its use to determine an optimal course of action in a multi-stage decision situation. On this foundation, we will next consider the stochastic case of a finite-horizon Markov decision process in discrete time and explain the optimality equations and their solution via the backward induction algorithm. Finally, we will consider the infinite-horizon, continuous-time case of discounted Markov decision processes and the value and policy iteration algorithms used to determine policies that are optimal in expectation.
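The backward induction algorithm for a finite-horizon Markov decision process fits in a short script. The sketch below uses an invented two-state, two-action process with a four-stage horizon; it evaluates the optimality equations V_t(s) = max_a [ r(s,a) + Σ p(s'|s,a) V_{t+1}(s') ] stage by stage, starting from terminal values of zero.

```python
# Tiny illustrative MDP (made-up numbers): two states, two actions.
# trans[s][a] = list of (next_state, probability); reward[s][a] = immediate reward.
states, actions, T = [0, 1], [0, 1], 4
trans = {
    0: {0: [(0, 0.8), (1, 0.2)], 1: [(0, 0.3), (1, 0.7)]},
    1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]},
}
reward = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}

# Backward induction: V_T = 0, then for t = T-1, ..., 0:
#   V_t(s) = max_a [ r(s,a) + sum over s' of p(s'|s,a) * V_{t+1}(s') ]
V = {s: 0.0 for s in states}                 # terminal values V_T
policy = []                                  # optimal action per stage and state
for t in reversed(range(T)):
    Q = {s: {a: reward[s][a] + sum(p * V[s2] for s2, p in trans[s][a])
             for a in actions} for s in states}
    policy.insert(0, {s: max(Q[s], key=Q[s].get) for s in states})
    V = {s: max(Q[s].values()) for s in states}

print(V, policy[0])   # optimal expected total reward and first-stage decisions
```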

On the fourth day, we will consider a linkage between stochastic modeling and optimization by treating two-stage linear programs using scenario techniques. Here we will use the GAMS modeling system to essentially embed a numerical simulation within a (formally deterministic) linear or mixed-integer linear optimization program. We will furthermore discuss either (and preferably) current research projects of the participants that involve probabilistic modeling or (alternatively) selected papers illustrating the methodologies taught in the course.
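The scenario idea behind such two-stage models can be previewed without GAMS. The sketch below (all numbers invented) fixes a first-stage production quantity before demand is known, prices the second-stage recourse in each scenario, and minimizes the probability-weighted total cost; in GAMS one would solve the resulting deterministic equivalent as a linear program, whereas here a simple grid search over the first-stage decision suffices.

```python
# Two-stage scenario model, made-up numbers: choose quantity x now; after the
# demand d_s of scenario s is revealed, shortage or holding costs are incurred.
scenarios = [(0.3, 80.0), (0.5, 100.0), (0.2, 130.0)]  # (probability, demand)
c, cu, co = 1.0, 4.0, 0.5     # production, shortage, holding cost per unit

def expected_cost(x):
    # First-stage cost plus probability-weighted second-stage (recourse) cost.
    return c * x + sum(p * (cu * max(d - x, 0) + co * max(x - d, 0))
                       for p, d in scenarios)

# Grid search over integer candidate first-stage decisions.
best_x = min(range(0, 151), key=expected_cost)
print(best_x, expected_cost(best_x))
```

The optimal first-stage quantity hedges between the scenarios rather than matching any single demand outcome exactly.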