Abstract:
Summary form only given. Both the LBG algorithm and the Kohonen learning algorithm (KLA) can become trapped in local minima: the empty-cell problem in LBG and the never-winning codevector in KLA. Although Kohonen learning relaxes the initialization requirements of the LBG algorithm, it still depends on the initial conditions. We invoke the principle of maximum information preservation for estimating an unknown probability density function and for unsupervised learning, and accordingly introduce winning-weighted competition into the design phase of vector quantizers, together with the corresponding implementation methods. The winning-weighted competitive learning (WWCL) proposed in this paper consists of a competition rule based on the winning-weighted distortion measure d_p(X, Y) = (1 + λ(p − 1/N)) ||X − Y||^2, used in place of the plain squared Euclidean distance, where p is the win rate of the codevector Y; the win-rate update p_i(t+1) = p_i(t) + α/M, where the competition status c_i of the i-th codevector is one for the winner and zero for a loser; and the codevector learning law Y_i(t+1) = Y_i(t) + c_i α(t)(X − Y_i(t)). The scheme lets neurons win with roughly equal probability, makes the synaptic vectors approximate the distribution of the input space, and removes the dependence on initial conditions in vector quantizer design. The performance of our algorithm is usually better than that of Kohonen learning and the LBG algorithm in terms of expected distortion and/or learning speed, and global optima were obtained. Experimental results of the proposed learning scheme are presented and compared with the LBG algorithm and the Kohonen learning algorithm.
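As a rough illustration only (the abstract gives no implementation details), the following Python/NumPy sketch shows one way the three WWCL components named above could fit together. The parameter names (N, lam, alpha0, M), the learning-rate schedule, the initialization, and the frequency-sensitive reading of the win-rate update are assumptions for the sketch, not the paper's specification.

    import numpy as np

    def wwcl(samples, N=16, lam=0.5, alpha0=0.1, epochs=10, seed=0):
        """Winning-weighted competitive learning sketch (illustrative parameters)."""
        rng = np.random.default_rng(seed)
        # Codevectors start from random training samples; win rates start uniform.
        Y = samples[rng.choice(len(samples), N, replace=False)].astype(float)
        p = np.full(N, 1.0 / N)      # win rate p_i of each codevector
        M = len(samples)             # normalization constant (assumed = training set size)
        for epoch in range(epochs):
            alpha = alpha0 * (1.0 - epoch / epochs)   # decaying learning rate alpha(t) (assumption)
            for X in samples[rng.permutation(len(samples))]:
                # Competition rule: winning-weighted distortion
                # d_p(X, Y_i) = (1 + lam*(p_i - 1/N)) * ||X - Y_i||^2
                dist = (1.0 + lam * (p - 1.0 / N)) * np.sum((X - Y) ** 2, axis=1)
                i = int(np.argmin(dist))
                c = np.zeros(N)
                c[i] = 1.0           # competition status c_i: 1 for the winner, 0 otherwise
                # Win-rate update; the abstract's terse expression is read here as a
                # frequency-sensitive rule that keeps sum(p) = 1 (an assumption).
                p += (c - p) / M
                # Codevector learning law: Y_i(t+1) = Y_i(t) + c_i * alpha(t) * (X - Y_i(t))
                Y[i] += alpha * (X - Y[i])
        return Y, p

    # Usage: quantize 2-D Gaussian data into 16 codevectors.
    data = np.random.default_rng(1).normal(size=(2000, 2))
    codebook, win_rates = wwcl(data)
    print(codebook.shape, win_rates.round(3))

The penalty term λ(p − 1/N) raises the effective distortion of frequently winning codevectors and lowers it for rarely winning ones, which is what drives the roughly equal win probabilities described in the abstract.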
Keywords:
image coding; neural nets; probability; unsupervised learning; vector quantisation; Kohonen learning algorithm; LBG algorithm; codevector learning law; competition rule; competition status; experimental results; global optima; image compression; input space; learning speed; maximum information preservation; probability density function; synaptic vectors; vector quantizers; win rate update; winning-weighted competition; winning-weighted competitive learning; winning-weighted distortion measure; Algorithm design and analysis; Distortion measurement; Euclidean distance; Neurons; Partitioning algorithms; Testing; Vector quantization