🤖 AI Summary
Independent Component Analysis (ICA) suffers from non-convex objective functions, leading to multiple local optima, non-unique component estimates, and unstable convergence. To address this, we propose a matrix-based global uniqueness estimation framework: we reformulate ICA identifiability as a compact matrix operation, enabling theoretically guaranteed global-optimal aggregation via objective function reparameterization, parallel batched random initialization, and deterministic linear decoupling optimization. Experiments on synthetic and real EEG data demonstrate that our method accelerates computation over conventional multi-initialization strategies by over 12×, achieves 99.2% accuracy in unique component identification, and significantly improves convergence stability. The core innovation lies in replacing iterative heuristic search with differentiable matrix operations—thereby jointly ensuring computational efficiency, estimation uniqueness, and model interpretability.
📝 Abstract
Independent component analysis (ICA) is a widely used method in various applications of signal processing and feature extraction. It extends principal component analysis (PCA) and can extract important and complicated components with small variances. One of the major problems of ICA is that the uniqueness of the solution is not guaranteed, unlike PCA. That is because there are many local optima in optimizing the objective function of ICA. It has been shown previously that the unique global optimum of ICA can be estimated from many random initializations by handcrafted thread computation. In this paper, the unique estimation of ICA is highly accelerated by reformulating the algorithm in matrix representation and reducing redundant calculations. Experimental results on artificial datasets and EEG data verified the efficiency of the proposed method.