Adjust the parameters to see how different polynomial models fit the data and how the error metrics change.
MSE (Mean Squared Error): Average of the squared differences between predicted and actual values. Lower values indicate better fit.
\[ MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \]
Where \(y_i\) is the actual value, \(\hat{y}_i\) is the predicted value from the model, and \(n\) is the number of data points.
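Below is a minimal sketch of how MSE could be computed for a set of predictions; the function name `compute_mse` and the sample values are illustrative only, not part of the demo.

```python
def compute_mse(y_actual, y_predicted):
    """Mean Squared Error: average of the squared residuals."""
    n = len(y_actual)
    return sum((y - y_hat) ** 2 for y, y_hat in zip(y_actual, y_predicted)) / n

# Three illustrative data points and their model predictions
print(compute_mse([2.0, 3.5, 5.0], [2.1, 3.0, 5.4]))  # ≈ 0.14
```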
MAE (Mean Absolute Error): Average of the absolute differences between predicted and actual values. More robust to outliers than MSE.
\[ MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i| \]
Where \(y_i\) is the actual value, \(\hat{y}_i\) is the predicted value from the model, and \(n\) is the number of data points.
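A corresponding sketch for MAE, again with an illustrative function name and sample values; note that each residual contributes linearly here rather than quadratically as in MSE, which is why a single outlier inflates MAE less.

```python
def compute_mae(y_actual, y_predicted):
    """Mean Absolute Error: average of the absolute residuals."""
    n = len(y_actual)
    return sum(abs(y - y_hat) for y, y_hat in zip(y_actual, y_predicted)) / n

# Same illustrative points as in the MSE example
print(compute_mae([2.0, 3.5, 5.0], [2.1, 3.0, 5.4]))  # ≈ 0.33
```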
R² (R-squared): Proportion of the variance in the data explained by the model. For a least-squares fit on the training data it ranges from 0 to 1, and higher values indicate better fit (1 = perfect fit).
\[ R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} \]
Where \(y_i\) is the actual value, \(\hat{y}_i\) is the predicted value, \(\bar{y}\) is the mean of actual values, and \(n\) is the number of data points.
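The following sketch computes R² directly from the formula above, as the complement of the residual sum of squares over the total sum of squares; the function name and sample values are again illustrative.

```python
def compute_r2(y_actual, y_predicted):
    """R-squared: 1 minus the residual sum of squares over the total sum of squares."""
    mean_y = sum(y_actual) / len(y_actual)
    ss_res = sum((y - y_hat) ** 2 for y, y_hat in zip(y_actual, y_predicted))
    ss_tot = sum((y - mean_y) ** 2 for y in y_actual)
    return 1 - ss_res / ss_tot

# Same illustrative points as in the MSE and MAE examples
print(compute_r2([2.0, 3.5, 5.0], [2.1, 3.0, 5.4]))  # ≈ 0.91
```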