Comments on: A Crash Course on Information Gain & some other DataMining terms and How To Get There
http://www.emresururi.com/physics/?p=36
a blog about physics, computation, computational physics and materials...

By: Hex, Bugs and More Physics | Emre S. Tasci » Blog Archive » Octave Functions for Information Gain (Mutual Information)
http://www.emresururi.com/physics/?p=36&cpage=1#comment-24
Mon, 07 Apr 2008 15:22:24 +0000

[…] theoretical background and Mathematica version of the functions refer to: A Crash Course on Information Gain & some other DataMining terms and How To Get There (my 21/11/2007 dated […]

By: admin
http://www.emresururi.com/physics/?p=36&cpage=1#comment-6
Thu, 22 Nov 2007 13:01:52 +0000

If you run the example code, you will see that you are receiving different values from the results presented here, even if you use the same 10 data points. This is because of the definition of the entropy: in the entry, I’m using the binary entropy with Log base 2, but in the Mathematica functions I’ve posted, I switched the Log base to e for general purposes.
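As a quick sanity check (a minimal sketch, not taken from the library), you can compare the two conventions on a fair coin distribution {0.5, 0.5}; Mathematica’s Log[x] is the natural logarithm, while Log[2, x] gives the base-2 value:

(* natural log: entropy in nats, ~0.693 *)
-Sum[p Log[p], {p, {0.5, 0.5}}]

(* base-2 log: entropy in bits, exactly 1 *)
-Sum[p Log[2, p], {p, {0.5, 0.5}}]

The two results differ only by the constant factor Log[2], since Log[x] == Log[2] Log[2, x].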

You can easily switch back to the former version by simply changing the line in the definition of the function DMEntropyEST from

Return[-Sum[problist[[i]] Log[problist[[i]]], {i, Length[problist]}]];

to

Return[-Sum[problist[[i]] Log[2, problist[[i]]], {i, Length[problist]}]];

in the DataMining Functions Library file “estrefdm.nb”.
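For reference, here is a self-contained sketch of such an entropy function with the base-2 convention; the name EntropyBitsEST and its plain definition (without the library’s surrounding Module/Return) are illustrative, not the exact code from estrefdm.nb:

(* entropy in bits of a probability list, e.g. {0.5, 0.3, 0.2} *)
EntropyBitsEST[problist_List] :=
  -Sum[problist[[i]] Log[2, problist[[i]]], {i, Length[problist]}]

(* a fair coin carries exactly 1 bit of entropy *)
EntropyBitsEST[{0.5, 0.5}]  (* -> 1. *)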
