
Octave Functions for Information Gain (Mutual Information)

April 7, 2008 Posted by Emre S. Tasci

Here is a set of functions, freshly coded in GNU Octave, that deal with Information Gain and related quantities.

For theoretical background and the Mathematica correspondence of the functions, refer to: A Crash Course on Information Gain & some other DataMining terms and How To Get There (my post dated 21/11/2007).

function [r t p] = est_members(x)

Finds the distinct members of the given vector x and returns three vectors r, t and p, where:

r : ordered unique member list
t : number of occurrences of each element in r
p : probability of each element in r

 Example :

octave:234> x
x =
   1   2   3   3   2   1   1

octave:235> [r t p] = est_members(x)
r =
   1   2   3
t =
   3   2   2
p =
   0.42857   0.28571   0.28571
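
As a small consistency check (not part of the original session), the probabilities are just the counts divided by the number of samples, so t/length(x) reproduces p and the probabilities sum to one:

% p is the relative frequency of each member; it sums to one
t / length(x)     % same values as p above
sum(p)            % should print 1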

 

function C = est_scent(z,x,y,y_n) 

Calculates the specific conditional entropy H(X|Y=y_n)

It is assumed that:
  X is stored in the x-th row of z
  Y is stored in the y-th row of z

Example :

octave:236> y
y =
   1   2   1   1   2   3   2

octave:237> z=[x;y]
z =
   1   2   3   3   2   1   1
   1   2   1   1   2   3   2

octave:238> est_scent(z,1,2,2)

ans =  0.63651
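
To see what the function is doing, H(X|Y=2) can also be checked by hand (a sketch, not from the original post, reusing the x and y vectors above and est_entropy_from_values, which is described further below): the x values at the positions where y equals 2 are [2 2 1], and the entropy of that restricted sample is the quoted 0.63651.

% hand check of H(X|Y=2) with the x and y of the examples above
xs = x(y==2);                    % x values where y equals 2  ->  [2 2 1]
est_entropy_from_values(xs)      % should reproduce ans = 0.63651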

 

function cent = est_cent(z,x,y)

Calculates the conditional entropy H(X|Y)
It is assumed that:
 X is stored in the x-th row of z
 Y is stored in the y-th row of z

Example :

octave:239> est_cent(z,1,2)
ans =  0.54558
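
The conditional entropy is the probability-weighted average of the specific conditional entropies over the values of Y. For the example vectors, p(Y=1) = p(Y=2) = 3/7 and p(Y=3) = 1/7, with H(X|Y=1) = H(X|Y=2) ≈ 0.63651 and H(X|Y=3) = 0 (only a single x value there), so H(X|Y) ≈ (6/7)*0.63651 ≈ 0.54558. A sketch of the same check, reusing z from above:

% weighted average of the specific conditional entropies, using z = [x;y]
(3/7)*est_scent(z,1,2,1) + (3/7)*est_scent(z,1,2,2) + (1/7)*est_scent(z,1,2,3)
% should reproduce ans = 0.54558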

 

function ig = est_ig(z,x,y)

Calculates the Information Gain (Mutual Information) IG(X|Y) = H(X) - H(X|Y)
It is assumed that:
  X is stored in the x-th row of z
  Y is stored in the y-th row of z

Example :

octave:240> est_ig(z,1,2)
ans =  0.53341
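
Indeed, for the example data H(X) ≈ 1.0790 and H(X|Y) ≈ 0.54558, so IG(X|Y) ≈ 0.53341. Since mutual information is symmetric, swapping the roles of the two rows should give the same value; a quick sketch (not part of the original session):

% information gain from the entropies computed above
est_entropy_from_values(x) - est_cent(z,1,2)   % should give 0.53341
% mutual information is symmetric in X and Y
est_ig(z,2,1)                                  % should also give 0.53341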

 

function ig = est_ig2(x,y)
Calculates the Information Gain IG(X|Y) = H(X) - H(X|Y), taking the two value vectors x and y directly

Example :

octave:186> est_ig2(x,y)
ans =  0.53341
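
est_ig2 simply builds z = [x;y] internally (see the function file below), so it should agree with est_ig applied to the combined matrix; a quick check:

% est_ig2(x,y) and est_ig([x;y],1,2) compute the same quantity
est_ig([x;y],1,2)   % should also give 0.53341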

 

function ig = est_ig2n(x,y)
Calculates the Normalized Information Gain
 IG(X|Y) = [H(X) - H(X|Y)]/min(H(X),H(Y))

Example :

octave:187> est_ig2n(x,y)
ans =  0.53116
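
For the example vectors H(X) ≈ 1.0790 and H(Y) ≈ 1.0042, so min(H(X),H(Y)) = H(Y) and the normalized gain is 0.53341/1.0042 ≈ 0.53116. A sketch of the same check (not from the original session):

% normalization by the smaller of the two entropies
enty = est_entropy_from_values(y)                          % about 1.0042
est_ig2(x,y) / min(est_entropy_from_values(x), enty)
% should reproduce ans = 0.53116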

 

function ent = est_entropy(p)

Calculates the entropy for the given probability vector p (the natural logarithm is used, so the result is in nats):
H(P) = -SUM(p_i * log(p_i))

Example :

octave:241> p
p =
   0.42857   0.28571   0.28571

octave:242> est_entropy(p)
ans =  1.0790
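
The same number can be obtained directly from the definition, and dividing by log(2) converts nats to bits if that is preferred; a sketch with the p of the example:

% direct evaluation of -sum(p_i * log(p_i)) for p = [0.42857 0.28571 0.28571]
-sum(p .* log(p))            % should reproduce ans = 1.0790 (nats)
-sum(p .* log(p)) / log(2)   % the same entropy expressed in bits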

 

function ent = est_entropy_from_values(x)

Calculates the entropy for the given values vector x:

H(X) = -SUM(p(a_i) * log(p(a_i)))

where {a} is the set of distinct values appearing in x and p(a_i) is the probability of a_i.

Example :

octave:243> x
x =
   1   2   3   3   2   1   1

octave:244> est_entropy_from_values(x)
ans =  1.0790
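
This is equivalent to first extracting the probabilities with est_members and then feeding them to est_entropy; a short check:

% est_entropy_from_values is est_members followed by est_entropy
[r t p] = est_members(x);
est_entropy(p)               % should reproduce ans = 1.0790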


Supplementary function files:

function [r t p] = est_members(x)
% Finds the distinct members of x
% r : ordered unique member list
% t : number of occurrences of each element in r
% p : probability of each element in r
% Emre S. Tasci 7/4/2008
    r = unique(x);                   % ordered unique members
    l = 0;
    for i = r
        l++;
        t(l) = length(find(x==i));   % count occurrences of member i
    endfor
    N = length(x);
    p = t/N;                         % relative frequencies
endfunction
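
The counting loop above can also be written in a vectorized form. Below is a possible sketch (the name est_members_vec is mine, not part of the original set); it should return the same three vectors:

function [r t p] = est_members_vec(x)
% Vectorized variant of est_members (illustrative sketch, not from the original post)
    r = unique(x);                         % ordered unique member list
    t = arrayfun(@(v) sum(x == v), r);     % occurrences of each member
    p = t / length(x);                     % probability of each member
endfunction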

   
function C = est_scent(z,x,y,y_n)
% Calculates the specific conditional entropy H(X|Y=y_n)
% It is assumed that:
%   X is stored in the x-th row of z
%   Y is stored in the y-th row of z
%   y_n is one of the possible values of Y
%       (i.e.   [r t p] = est_members(z(y,:));
%               y_n = r(n) )
%   Emre S. Tasci 7/4/2008
[r t p] = est_members(z(x,:)(z(y,:)==y_n));   % distribution of X restricted to Y = y_n
C = est_entropy(p);
endfunction

 

function cent = est_cent(z,x,y)
% Calculates the conditional entropy H(X|Y)
% It is assumed that:
%   X is stored in the x-th row of z
%   Y is stored in the y-th row of z
% Emre S. Tasci 7/4/2008
cent = 0;
j = 0;
[r t p] = est_members(z(y,:));
for i=r
    j++;
    cent += p(j)*est_scent(z,x,y,i);   % p(Y=i) * H(X|Y=i)
endfor
endfunction

 

function ig = est_ig(z,x,y)
% Calculates the Information Gain IG(X|Y) = H(X) - H(X|Y)
% It is assumed that:
%   X is stored in the x-th row of z
%   Y is stored in the y-th row of z
% Emre S. Tasci 7/4/2008
[r t p] = est_members(z(x,:));
ig = est_entropy(p) - est_cent(z,x,y);
endfunction

 

function ig = est_ig2(x,y)
% Calculates the Information Gain IG(X|Y) = H(X) - H(X|Y)
% Emre S. Tasci <e.tasci@tudelft.nl> 8/4/2008
z = [x;y];
[r t p] = est_members(z(1,:));
ig = est_entropy(p) - est_cent(z,1,2);
endfunction

 

function ig = est_ig2n(x,y)
% Calculates the Normalized Information Gain
% IG(X|Y) = [H(X) - H(X|Y)]/min(H(X),H(Y))
% Emre S. Tasci <e.tasci@tudelft.nl> 8/4/2008
z = [x;y];
[r t p] = est_members(z(1,:));
entx = est_entropy(p);                  % H(X)
enty = est_entropy_from_values(y);      % H(Y)
minent = min(entx,enty);
ig = (entx - est_cent(z,1,2))/minent;
endfunction

 

function ent = est_entropy(p)
% Calculates the entropy of the given probability vector p:
%       H(P) = - Sum(p_i * log(p_i))
%   If you want to calculate the entropy directly from a values array,
%       use the est_entropy_from_values(x) function
%
% Emre S. Tasci 7/4/2008
ent = 0;
    for i = p
        ent += -i*log(i);       % accumulate -p_i * log(p_i)
    endfor
endfunction
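
Since the probabilities returned by est_members are always strictly positive, the loop above can also be collapsed into a single vectorized expression; an equivalent sketch:

% vectorized equivalent of the loop above (assumes all entries of p are > 0)
ent = -sum(p .* log(p));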

 

function ent = est_entropy_from_values(x)
% Calculates the entropy of the given set of values x:
%       H(X) = - Sum(p(x_i) * log(p(x_i)))
%   If you want to calculate the entropy from a probabilities array,
%       use the est_entropy(p) function
%
% Emre S. Tasci 7/4/2008
ent = 0;
[r t p] = est_members(x);       % probabilities of the distinct values of x
    for i = p
        ent += -i*log(i);
    endfor
endfunction
