Acta Mathematica Academiae Scientiarum Hungaricae 10. (1959)


ON MEASURES OF DEPENDENCE

By A. RÉNYI (Budapest), member of the Academy

Introduction

In this paper we shall discuss and compare certain quantities which are used to measure the strength of dependence (or correlation, in the widest sense of this word) between two random variables. We formulate seven rather natural postulates which should be fulfilled by a suitable measure of dependence. The maximal correlation introduced by H. Gebelein [1] (for a more general treatment see [2]) fulfils all these postulates. As in our previous paper [2] we shall make use of the technique of conditional mean values, as developed by A. N. Kolmogorov [3], which is needed to define the different measures of dependence and to prove their properties, and the connections between them, under much more general conditions than is usual in the literature.

In § 1 we introduce the definitions and notations to be used throughout the paper. § 2 contains the definitions and fundamental properties of the mentioned measures of dependence. § 3 contains the proof of the main theorem of the present paper (Theorem 2), according to which the maximal correlation can be attained, provided that the mean square contingency is finite.

§ 1. Definitions and notations

Let [Ω, 𝒜, P] be a probability space (see [3]), i.e. Ω an arbitrary non-empty set whose elements will be denoted by ω, 𝒜 a σ-algebra of subsets of Ω whose elements will be denoted by capital letters A, B, etc., and P = P(A) a probability measure on 𝒜. We shall denote random variables on [Ω, 𝒜, P] (i.e. real functions defined on Ω and measurable with respect to 𝒜) by Greek letters ξ, η, etc. If ξ is a random variable, we denote by M(ξ) its mean value and by D²(ξ) its variance. If M(ξ) and D(ξ) exist and D(ξ) > 0, we put

(1)    $\xi^* = \dfrac{\xi - M(\xi)}{D(\xi)}$

and call the transformation by which ξ* is obtained from ξ the standardi-
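A one-line check, added here for the reader (it is not part of Rényi's text): by the linearity of the mean and the scaling rule for the variance, the standardized variable defined in (1) indeed has zero mean and unit variance.

```latex
% Check that \xi^* = (\xi - M(\xi))/D(\xi) is standardized:
M(\xi^*) = \frac{M(\xi) - M(\xi)}{D(\xi)} = 0,
\qquad
D^2(\xi^*) = \frac{D^2\bigl(\xi - M(\xi)\bigr)}{D^2(\xi)}
           = \frac{D^2(\xi)}{D^2(\xi)} = 1.
```

Here we use that subtracting the constant M(ξ) leaves the variance unchanged and that dividing by the constant D(ξ) divides the variance by D²(ξ).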
