Studies in Chaos and Finance: Chapter I

"A mathematician is a device for turning coffee into theorems." – Alfréd Rényi

One of the notable limitations of a standard autoregressive model is that it intrinsically assumes distributional homogeneity across the historical time horizon. A system's impulse response to a change in the value of a shock term, k, at some time-step, t_{n - i}, must also account for influences imposed by external systems evolving in parallel – especially if there exists a correlation known to be of particular significance. These nuanced characteristics of real-world scenarios further complicate an autoregressive model's broad application as a time-varying forecast. This new series explores mathematical machinery borrowed from Itô calculus as a means to derive a systematic solution to an n-state autoregressive model where significant correlation exists between two interacting time-varying processes with underlying random components.

Imagine a singular stationary point in a closed system: an infinitesimally small region containing within it maximum information in a state of equiprobability. In a state of such rigid order the propensity for information mobility is minimized. Now consider allowing this system to interact with another time-varying process. The structural stability of our information space deteriorates as its entropy increases, driven by volatility from information gain. This deterioration accelerates as our system evolves forward in time.

Suppose that the entropic state for the subset of initial information S_i, observed at time t_n, is bound by the preceding entropic state at time t_{n-1}. Therefore, in the case of discrete intervals we have

{\displaystyle x_t = \alpha x_{t-1}+z_t }     (1)

where \alpha is a scale parameter and z_t represents white noise. Rearranging the terms in (1), and writing \nabla x_t = x_t - x_{t-1} for the first difference, we can show

{\displaystyle (1-\alpha) x_t+\alpha\nabla x_t = z_t }     (2)

This result derives from the fact that the first difference of a random walk forms a purely random process. The analogous interpretation of (2) in continuous time can be described by the general form

{\displaystyle \alpha x(t)+\frac {\partial x(t)}{\partial t} = z(t) }     (3)
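Before continuing, the discrete rearrangement in (2) can be sanity-checked numerically: simulating (1) and computing (1-\alpha) x_t + \alpha\nabla x_t should recover the noise term z_t exactly. The sketch below assumes an illustrative \alpha = 0.7 and standard Gaussian white noise.

```python
import random

random.seed(42)
alpha = 0.7

# simulate x_t = alpha * x_{t-1} + z_t  -- equation (1)
xs, z_hist = [0.0], []
for _ in range(1000):
    z = random.gauss(0, 1)
    xs.append(alpha * xs[-1] + z)
    z_hist.append(z)

# verify (1 - alpha) * x_t + alpha * (x_t - x_{t-1}) == z_t  -- equation (2)
for t in range(1, len(xs)):
    lhs = (1 - alpha) * xs[t] + alpha * (xs[t] - xs[t - 1])
    assert abs(lhs - z_hist[t - 1]) < 1e-9
```

The identity holds term by term, since (1-\alpha) x_t + \alpha (x_t - x_{t-1}) = x_t - \alpha x_{t-1} = z_t.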

Expression (3) is what is commonly called a first-order continuous autoregressive equation, CAR(1). A CAR process of order p is generally represented by the following equation

{\displaystyle x^{(p)}(t)+\alpha_{p-1}x^{(p-1)}(t)+\dots+\alpha_0x(t) = z(t) }     (4)

Note that z(t) represents a continuous white noise process, which cannot physically exist. We will instead replace this term with one that represents infinitesimal changes characterized by Gaussian orthogonal increments. That is to say, for any two non-overlapping time intervals \left[t_a, t_b\right] and \left[t_c, t_d\right], the increments W_{t_d}-W_{t_c} are independent of past values W_{t_b}-W_{t_a}. Furthermore, W_0 is always zero. W_t is described as a pure Wiener process. We will rewrite (3) in its first-order stochastic differential form

{\displaystyle dx_t = \alpha x_tdt + \sigma dW_t }     (5)

where \alpha and \sigma are more formally referred to as drift and volatility. Expression (5) appears in the well-known Ornstein-Uhlenbeck model. It is, however, an incomplete characterization of our particular chaotic system, because our information space is no longer a closed system, but rather one interacting with another chaotic system that exerts a systematic influence on the state variable x(t). As a result, the variability of the distribution is no longer constant through time; our system is said to be heteroskedastic. In order to account for this non-linearity we can relax orthogonality by introducing a function f that describes the relationship between these interacting systems

{\displaystyle dx^{i,j}_t = \alpha x^{i,j}_tdt + f(\sigma W^{\{i,j\}}_t) }     (6)
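As an aside, the single-system dynamics in (5) can be simulated with a simple Euler–Maruyama discretization, drawing each increment dW_t from \mathcal{N}(0, dt). The step size, horizon, and a mean-reverting \alpha < 0 below are illustrative assumptions, not values from the text.

```python
import math
import random

random.seed(0)
alpha, sigma = -0.5, 0.2   # assumed drift and volatility
dt, n = 0.01, 10_000       # assumed step size and number of steps

x = 1.0
path = [x]
# dx_t = alpha * x_t * dt + sigma * dW_t, with dW_t ~ N(0, dt)  -- equation (5)
for _ in range(n):
    dW = random.gauss(0, math.sqrt(dt))
    x += alpha * x * dt + sigma * dW
    path.append(x)
```

With \alpha < 0 the simulated path mean-reverts toward zero, the characteristic Ornstein-Uhlenbeck behaviour noted above.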

Let us define \tilde{W} to be some linear combination of our closed system, W^i_t, and the outside system, W^j_t. We can formalize this interpretation by writing out the total differential form

{\displaystyle df(\textbf{X}) = \sum_ {k = 1}^{n}  \frac {\partial f(\textbf{X})}{\partial x_k} \, dx_k }     (7)

which can be re-written as

{\displaystyle \int df(\textbf{X}) = \idotsint_S \, \sum_ {k = 1}^{n} \frac {\partial f(\textbf{X})}{\partial x_k} \, dx_k \quad dx_1 \dots dx_n }     (8)

Now let us assume that

{\displaystyle \frac {\partial f(\textbf{X})}{\partial x_k} = z_k }     (9)

which implies

{\displaystyle \int d\tilde{W}(\textbf{Z}) = \idotsint_S \, \sum_ {k = 1}^{n} z_k \, dz_k \quad dz_1 \dots dz_n }     (10)

where \tilde{W}(\textbf{Z}) \mapsto f(\textbf{X}). For n = 2 the above expression reduces to

{\displaystyle \tilde{W}(z_i, z_j) = \frac{1}{2} (z_i^2 + z_j^2) }     (11)

assuming zero constants of integration, and where z_i and z_j are independent, each distributed \mathcal{N}(0, t). Based on this we can define the following

{\displaystyle W^{\{i\}}_t = \sigma_i z_i }     (12)

{\displaystyle W^{\{j\}}_t = \sigma_j (\rho z_i + \sqrt{1 - \rho^{2}} z_j) }     (13)

where \rho \in (-1, 1) is the correlation coefficient and W^{\{j\}}_t describes the interaction between stochastic systems i and j. Solving for z_i and z_j we have

{\displaystyle z_i = \frac {1}{\sigma_i} \, W^{\{i\}}_t }     (14)

{\displaystyle z_j = \frac {\sigma_i W^{\{j\}}_t - \rho \sigma_j W^{\{i\}}_t}{\sigma_i \sigma_j \sqrt{1 - \rho^2}} }     (15)

Therefore, it follows

{\displaystyle \begin{aligned}     \tilde{W}(z_i, z_j) & = \frac {1}{2} \left[\frac {W^{\{i\},2}_t}{\sigma_i^2} + \left(\frac {\sigma_i W^{\{j\}}_t - \rho \sigma_j W^{\{i\}}_t}{\sigma_i \sigma_j \sqrt{1 - \rho^2}}\right)^2\right] \\     & = \frac {1}{2(1 - \rho^2)} \left[\frac {W^{\{i\},2}_t}{\sigma_i^2} + \frac{W^{\{j\},2}_t}{\sigma_j^2} - 2 \rho \frac {W^{\{i\}}_t W^{\{j\}}_t}{\sigma_i \sigma_j}\right] \end{aligned} }     (16)
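The algebra behind (16) is easy to check numerically: substituting z_i and z_j from (14) and (15) into \frac{1}{2}(z_i^2 + z_j^2) should agree with the closed form for arbitrary values. The sketch below uses randomly drawn test values, an assumption purely for verification.

```python
import math
import random

random.seed(0)

def lhs(Wi, Wj, si, sj, rho):
    # (1/2)(z_i^2 + z_j^2), with z_i, z_j taken from (14) and (15)
    z_i = Wi / si
    z_j = (si * Wj - rho * sj * Wi) / (si * sj * math.sqrt(1 - rho**2))
    return 0.5 * (z_i**2 + z_j**2)

def rhs(Wi, Wj, si, sj, rho):
    # the closed form of (16)
    return (Wi**2 / si**2 + Wj**2 / sj**2
            - 2 * rho * Wi * Wj / (si * sj)) / (2 * (1 - rho**2))

for _ in range(1000):
    args = (random.gauss(0, 1), random.gauss(0, 1),
            random.uniform(0.5, 2), random.uniform(0.5, 2),
            random.uniform(-0.9, 0.9))
    assert math.isclose(lhs(*args), rhs(*args), rel_tol=1e-9, abs_tol=1e-12)
```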

Given (16), we can re-write (6)

{\displaystyle dx^{i,j}_t = \alpha x^{i,j}_tdt + \tilde{W}(z_i, z_j) }     (17)
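The correlated driving noise constructed in (12) and (13) can also be checked by Monte Carlo: drawing independent z_i, z_j \sim \mathcal{N}(0, t) and forming W^{\{i\}}_t and W^{\{j\}}_t, the empirical correlation between the two should approach \rho. The sample size and parameter values below are illustrative assumptions.

```python
import math
import random

random.seed(1)
t, rho = 1.0, 0.6                # assumed horizon and correlation
sigma_i, sigma_j = 1.0, 1.0      # assumed volatilities
n = 100_000

wi, wj = [], []
for _ in range(n):
    z_i = random.gauss(0, math.sqrt(t))   # z_i ~ N(0, t)
    z_j = random.gauss(0, math.sqrt(t))   # z_j ~ N(0, t), independent of z_i
    wi.append(sigma_i * z_i)                                        # (12)
    wj.append(sigma_j * (rho * z_i + math.sqrt(1 - rho**2) * z_j))  # (13)

mi, mj = sum(wi) / n, sum(wj) / n
cov = sum((a - mi) * (b - mj) for a, b in zip(wi, wj)) / n
sd_i = math.sqrt(sum((a - mi)**2 for a in wi) / n)
sd_j = math.sqrt(sum((b - mj)**2 for b in wj) / n)
print(cov / (sd_i * sd_j))   # ≈ 0.6
```

The sample correlation converges to \rho as n grows, confirming that (13) injects exactly the intended cross-system dependence.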

In the next chapter we will outline the steps that solve for x(t), extending these results to derive an n-state CAR framework.
