🤖 AI Summary
Existing electrochemical impedance spectroscopy (EIS) data fitting lacks systematic evaluation of loss function selection for equivalent circuit modeling. Method: We propose two novel Bode-plot-based loss functions—log-B (logarithmic magnitude loss) and log-BW (weighted logarithmic magnitude loss)—and conduct a comprehensive comparison with the conventional χ² and other losses via nonlinear least-squares optimization on standard EIS datasets, using R², χ², MAPE, and computational time as evaluation metrics. Results: While χ² achieves the highest absolute accuracy, log-B attains near-optimal fidelity (R² > 0.999) with 1.4× faster convergence and significantly reduced parameter estimation errors for most circuit elements; log-BW further improves low-frequency robustness. This work offers practical guidance for balancing efficiency and accuracy in large-scale EIS analysis.
📝 Abstract
Electrochemical impedance spectroscopy (EIS) data is typically modeled using an equivalent circuit model (ECM), with parameters obtained by minimizing a loss function via nonlinear least-squares fitting. This paper introduces two new loss functions, log-B and log-BW, derived from the Bode representation of EIS. Using a large dataset of generated EIS data, the performance of the proposed loss functions was evaluated alongside existing ones in terms of R² scores, chi-squared, computational efficiency, and the mean absolute percentage error (MAPE) between the predicted and original component values. Statistical comparisons revealed that the choice of loss function impacts convergence, computational efficiency, quality of fit, and MAPE. Our analysis showed that the χ² loss function (sum of squared residuals with proportional weighting) achieved the highest performance across multiple quality-of-fit metrics, making it the preferred choice when quality of fit is the primary goal. On the other hand, log-B offered a slightly lower quality of fit while being approximately 1.4 times faster and producing lower MAPE for most circuit components, making log-B a strong alternative. This speed advantage is a critical factor for large-scale least-squares fitting in data-driven applications, such as training machine learning models on extensive datasets or running many fitting iterations.
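To make the comparison concrete, the sketch below fits a simple R0 + (R1 ∥ C1) equivalent circuit to a synthetic impedance spectrum with two losses: a proportionally weighted chi-squared residual and a log-magnitude (log-B-style) Bode residual. The circuit, parameter values, and exact weighting are illustrative assumptions, not the paper's precise formulations.

```python
# Hedged sketch: proportionally weighted chi-squared vs. a log-magnitude
# (log-B-style) Bode loss for ECM fitting. Circuit and weights are assumptions.
import numpy as np
from scipy.optimize import least_squares

def z_model(params, w):
    """Impedance of R0 in series with a parallel R1-C1 branch."""
    r0, r1, c1 = params
    return r0 + r1 / (1 + 1j * w * r1 * c1)

# Synthetic, noise-free "measured" spectrum
w = 2 * np.pi * np.logspace(-1, 5, 60)   # angular frequencies [rad/s]
true = (10.0, 100.0, 1e-5)               # R0 [ohm], R1 [ohm], C1 [F]
z_meas = z_model(true, w)

def resid_chi2(params):
    # Real and imaginary residuals scaled by 1/|Z| (proportional weighting)
    z = z_model(params, w)
    d = (z - z_meas) / np.abs(z_meas)
    return np.concatenate([d.real, d.imag])

def resid_logb(params):
    # log-B-style residual: difference of log10 magnitudes on the Bode plot
    z = z_model(params, w)
    return np.log10(np.abs(z)) - np.log10(np.abs(z_meas))

x0 = (1.0, 10.0, 1e-6)  # deliberately off-target starting guess
fit_chi2 = least_squares(resid_chi2, x0, bounds=(0, np.inf), x_scale='jac')
fit_logb = least_squares(resid_logb, x0, bounds=(0, np.inf), x_scale='jac')
# On this noise-free toy problem both fits should approach the true values
print(fit_chi2.x)
print(fit_logb.x)
```

Note that `x_scale='jac'` helps the solver handle parameters spanning several orders of magnitude (ohms vs. farads); on real, noisy spectra the two losses would diverge more in both fitted values and runtime, which is the trade-off the paper quantifies.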