Problems of Control and Information Theory 3. (Budapest, 1974)

1974 / Issue 2 - Nemetz, T.: On the alpha-divergence rate for Markov-dependent hypotheses

Problems of Control and Information Theory, Vol. 3(2), pp. 147-155 (1974)

ON THE $\alpha$-DIVERGENCE RATE FOR MARKOV-DEPENDENT HYPOTHESES*

T. NEMETZ (Budapest)

(Received November 13, 1973)

In testing statistical hypotheses Rényi's $\alpha$-divergence plays a very important role. In this paper we give an asymptotic expression for the $\alpha$-divergence of two probability measures associated with Markov chains and discuss its connection with the error probability.

1. Introduction

The discussion in this paper is motivated mainly by the following simple vs. simple hypothesis testing problem, where the Bayesian approach is adopted. Let $\vartheta$ be a parameter random variable with the a priori probability distribution

$$w_0 = \mathrm{Prob}(\vartheta = \vartheta_0), \qquad w_1 = \mathrm{Prob}(\vartheta = \vartheta_1) = 1 - w_0,$$

where $w_0 \cdot w_1 > 0$. Let $\xi$ be a random variable defined on a certain probability space and taking its values in the sample space $(X, \mathcal{A})$. Denote the probability measures generated by $\xi$ on $(X, \mathcal{A})$ under the hypotheses $\mathcal{H}_0 := \{\vartheta = \vartheta_0\}$ and $\mathcal{H}_1 := \{\vartheta = \vartheta_1\}$ by $P$ and $Q$ respectively, and let the corresponding density functions with respect to a given dominating $\sigma$-finite measure $\mu$ be $p(x)$ and $q(x)$ respectively.

Let us consider an arbitrary decision $d(\xi)$ on the basis of the observation $\xi$. The error probability of the decision rule $d(\cdot)$ is defined as the probability of a wrong decision:

$$e(d) = w_0 P(d(\xi) = \vartheta_1) + w_1 Q(d(\xi) = \vartheta_0).$$

The minimal error probability

$$e(w_0, P, Q) = \inf_{d} e(d)$$

will be referred to simply as the error probability. It is well known (see e.g. Rényi [9]) that the minimal error probability can always be achieved.

* An earlier version of this paper was presented at the Sixth Prague Conference on Information Theory, Prague, September 1971.
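The two displayed quantities above can be illustrated numerically. The following is a minimal sketch, not from the paper: it assumes a finite sample space with the counting measure as the dominating measure $\mu$, and invented densities `p`, `q` and prior `w0`. The decision rule attaining the infimum is the standard Bayes rule (decide $\vartheta_1$ exactly where $w_1 q(x) > w_0 p(x)$), which gives the well-known closed form $e(w_0, P, Q) = \sum_x \min(w_0 p(x), w_1 q(x))$.

```python
import numpy as np

# Hypothetical three-point sample space; densities w.r.t. the counting measure.
p = np.array([0.5, 0.3, 0.2])   # density p(x) under hypothesis H0
q = np.array([0.1, 0.3, 0.6])   # density q(x) under hypothesis H1
w0 = 0.4                        # a priori probability of H0
w1 = 1.0 - w0                   # w1 = 1 - w0, and w0 * w1 > 0 as required

def error_prob(decide_h1, p, q, w0, w1):
    """e(d) = w0 * P(d = theta_1) + w1 * Q(d = theta_0),
    where decide_h1[x] is True iff the rule d outputs theta_1 at x."""
    return w0 * p[decide_h1].sum() + w1 * q[~decide_h1].sum()

# Bayes rule: output theta_1 exactly where w1*q(x) > w0*p(x); attains inf_d e(d).
bayes_rule = w1 * q > w0 * p
e_min = error_prob(bayes_rule, p, q, w0, w1)

# Closed form for the minimal error probability on a discrete space.
e_closed = np.minimum(w0 * p, w1 * q).sum()
print(e_min, e_closed)  # both equal 0.26 for these numbers
```

Any other rule can only do worse: flipping the decision at a single point $x$ replaces the term $\min(w_0 p(x), w_1 q(x))$ in the sum by the larger of the two values.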
