🤖 AI Summary
This study addresses the challenge of identifying potential target firms for shareholder activism by activist investment funds—enabling portfolio companies to mitigate intervention risk, funds to optimize stock selection, and investors to exploit arbitrage opportunities. We propose an interpretable machine learning framework that integrates game-theoretic feature engineering with a rule-based attention mechanism to support causal attribution and traceable investment logic. The pipeline combines LightGBM for prediction, SHAP-based interpretability analysis, and symbolic regression for post-hoc logical refinement. Evaluated on five years of U.S. equity data, the model achieves an AUC of 0.89 and explains key feature contributions with over 92% accuracy. Its interpretability, transparency, and regulatory compliance have led to its adoption in the U.S. Securities and Exchange Commission's (SEC) regulatory sandbox pilot program.
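For context on the headline AUC figure: AUC equals the probability that a randomly chosen positive (targeted firm) is ranked above a randomly chosen negative (non-targeted firm). The sketch below computes it directly from that rank definition on synthetic scores; it is an illustration of the metric, not the paper's model or data.

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive scores higher
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for six firms (1 = activist target).
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(round(roc_auc(labels, scores), 3))  # one negative outranks one positive
```

A score of 1.0 would mean every targeted firm outranks every non-targeted one; the reported 0.89 indicates the model's ranking is wrong for roughly one in nine such pairs.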