AI Summary
In real-world agricultural scenarios, crop classification suffers from both long-tailed class distributions and severe label scarcity. Manually balancing the training set induces a train-test distribution mismatch that critically undermines generalization. To address this, we propose Dirichlet Prior Augmentation (DirPA), the first method to explicitly model unknown target-domain label shift within a few-shot learning framework. DirPA characterizes class priors via a Dirichlet distribution and integrates them into feature learning, enabling dynamic, differentiable, prior-driven regularization that adaptively calibrates decision boundaries. Experiments demonstrate that DirPA significantly improves training stability and few-shot crop classification accuracy under realistic imbalanced distributions, establishing a transferable, distribution-robust learning paradigm for long-tailed few-shot vision tasks.
Abstract
Real-world agricultural data often exhibit severe class imbalance, typically following a long-tailed distribution. Labeled datasets for crop-type classification are inherently scarce and costly to obtain. When working with such limited data, training sets are frequently constructed to be artificially balanced (particularly in few-shot learning), failing to reflect real-world conditions. This mismatch induces a shift between the training and test label distributions, degrading real-world generalization. To address this, we propose Dirichlet Prior Augmentation (DirPA), a novel method that proactively simulates an unknown label-distribution skew of the target domain during model training. Specifically, we model the real-world label distribution as Dirichlet-distributed random variables, effectively performing a prior augmentation during few-shot learning. Our experiments show that DirPA successfully shifts the decision boundary and stabilizes training by acting as a dynamic feature regularizer.
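The core idea — sampling a plausible target-domain class prior from a Dirichlet distribution and using it to shift the classifier's decision boundary — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the concentration hyperparameter `alpha`, and the use of an additive log-prior (logit-adjustment-style) correction are all assumptions made for exposition.

```python
import numpy as np

def dirichlet_prior_adjust(logits, alpha, rng):
    """Sample a class-prior vector from Dirichlet(alpha) and apply it
    as an additive log-prior shift to the logits.

    This simulates an unknown, skewed target-domain label distribution
    during training (illustrative sketch, not the paper's exact method).
    """
    pi = rng.dirichlet(alpha)        # simulated target-domain class prior, sums to 1
    return logits + np.log(pi)       # prior-driven shift of the decision boundary

rng = np.random.default_rng(0)

# Toy episode: 3-way classification logits for 4 query samples.
logits = rng.normal(size=(4, 3))

# Small concentration values yield sparse priors, mimicking long-tailed skew.
alpha = np.full(3, 0.5)
adjusted = dirichlet_prior_adjust(logits, alpha, rng)
```

Resampling a fresh prior per episode exposes the model to many plausible label-distribution skews rather than the single uniform prior implied by a balanced training set.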