🤖 AI Summary
To address the absence of closed-form solutions for complex joint distributions (e.g., $P(Y \mid z_1,\dots,z_N)$) and the reliance on computationally intensive approximate inference methods such as MCMC, this paper proposes the Indeterminate Probability Neural Network (IPNN) framework. IPNN directly models neural network outputs as measurable events within an extended probability space, enabling joint learning of classification and unsupervised clustering through event statistics and logical reasoning. Its core contributions are: (i) the first neural architecture grounded in event-space modeling and statistical inference; and (ii) strong theoretical guarantees supporting ultra-lightweight recognition even for ultra-large-scale label spaces (e.g., ten billion classes). Experiments demonstrate that a compact network with only hundreds of output nodes achieves efficient classification over billions of classes, drastically reducing computational and memory overhead.
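The event-statistics idea above can be illustrated with a minimal sketch. This is our own toy example, not the paper's implementation: we treat a discrete latent output $z$ as an observed event and estimate the conditional $P(Y \mid z)$ from co-occurrence counts, exactly as classical probability theory would over recorded events.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's architecture):
# 3 classes Y and 3 latent event states z, observed jointly over 1000 samples.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=1000)              # observed classes Y
events = (labels + rng.integers(0, 2, size=1000)) % 3  # noisy latent events z

# Joint counts N(Y=y, z=k), then conditional P(Y=y | z=k) = N(y,k) / N(k).
joint = np.zeros((3, 3))
np.add.at(joint, (labels, events), 1)
cond = joint / joint.sum(axis=0, keepdims=True)

# Each column of `cond` is a valid distribution over Y given an event z=k,
# usable directly for inference at test time.
assert np.allclose(cond.sum(axis=0), 1.0)
```

The point is that once network outputs are treated as events, inference reduces to counting, with no sampling-based approximate inference required.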
📝 Abstract
We propose a new general model called IPNN - Indeterminate Probability Neural Network - which combines neural networks with probability theory. In classical probability theory, probabilities are computed from the occurrence of events, a notion rarely used in current neural networks. In this paper, we propose a new general probability theory that extends the classical one and contains it as a special case. Within our proposed neural network framework, the outputs of the network are defined as probability events, and from the statistical analysis of these events the inference model for the classification task is derived. IPNN exhibits a new property: it can perform unsupervised clustering while doing classification. Moreover, IPNN can handle very large-scale classification with a very small network; e.g., a model with 100 output nodes can classify 10 billion categories. These theoretical advantages are reflected in the experimental results.
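The "100 output nodes for 10 billion categories" claim comes down to combinatorics. A short sketch under an assumed factorization (our own illustration, not necessarily the paper's exact split): partition the 100 nodes into 10 latent variables of 10 states each, so that each class corresponds to one joint assignment of states.

```python
# Assumed factorization for illustration: 100 output nodes split into
# 10 latent variables, each holding a softmax over 10 states.
num_nodes = 100
num_vars = 10
states_per_var = num_nodes // num_vars  # 10 states per latent variable

# Each class maps to one joint state tuple (s_1, ..., s_10), so the number
# of distinguishable classes is the product of the per-variable state counts.
num_classes = states_per_var ** num_vars
print(num_classes)  # 10_000_000_000, i.e. 10 billion categories
```

The output layer thus grows with the *sum* of the state counts (100 nodes) while the representable label space grows with their *product* (10^10), which is the source of the claimed memory savings.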