- Consider the complexity of the models. A smaller model that explains the data just as well as a larger model is generally preferred, as it is simpler and easier to interpret.
- Look at other goodness-of-fit metrics, such as adjusted R-squared, AIC (Akaike Information Criterion), or BIC (Bayesian Information Criterion). These metrics penalize model complexity, so the smaller model can score better even when its raw R-squared is slightly lower. For nested models, a partial F-test can also be used to check whether the larger model (v2) fits significantly better than the smaller model (v1).
- Conduct cross-validation or use a hold-out dataset to test how the models perform on new data. The model with better out-of-sample performance is generally preferred.
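As a rough illustration of the AIC/BIC and partial F-test comparison above, here is a sketch in Python (the data are synthetic and stand in for the poster's measurements; a linear vs. quadratic fit plays the role of the smaller vs. larger model):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumption: the poster's real data is not shown)
f = np.linspace(0.0, 10.0, 50)
v = 2.0 + 0.5 * f + 0.1 * f**2 + rng.normal(0.0, 1.0, f.size)

def fit_poly(x, y, degree):
    """Least-squares polynomial fit; returns residual sum of squares and #params."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return float(resid @ resid), degree + 1

n = f.size
rss1, p1 = fit_poly(f, v, 1)   # smaller model: a1 + a2*f
rss2, p2 = fit_poly(f, v, 2)   # larger model:  a1 + a2*f + a3*f^2

# Gaussian-likelihood AIC/BIC (up to an additive constant shared by both models)
aic = lambda rss, p: n * math.log(rss / n) + 2 * p
bic = lambda rss, p: n * math.log(rss / n) + p * math.log(n)

# Partial F-test for nested models: does the extra term reduce RSS significantly?
F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))

print(f"AIC: {aic(rss1, p1):.1f} vs {aic(rss2, p2):.1f}")
print(f"BIC: {bic(rss1, p1):.1f} vs {bic(rss2, p2):.1f}")
print(f"F-statistic: {F:.2f}")
```

Lower AIC/BIC favors a model; a large F-statistic (compared against the F distribution with `p2 - p1` and `n - p2` degrees of freedom) favors keeping the extra term.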
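The cross-validation suggestion can likewise be sketched (Python again, with invented data and a hand-rolled k-fold split rather than the poster's actual setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data (assumption: stands in for the poster's measurements)
f = np.linspace(0.0, 10.0, 60)
v = 2.0 + 0.5 * f + 0.2 * f**2 + rng.normal(0.0, 1.0, f.size)

def cv_mse(x, y, degree, k=5):
    """k-fold cross-validated mean squared prediction error for a polynomial fit."""
    idx = rng.permutation(x.size)
    folds = np.array_split(idx, k)
    errs = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)          # all points not in this fold
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[fold])
        errs.append(np.mean((y[fold] - pred) ** 2))
    return float(np.mean(errs))

mse_lin = cv_mse(f, v, 1)    # smaller model
mse_quad = cv_mse(f, v, 2)   # larger model
print(f"5-fold CV MSE: linear {mse_lin:.2f}, quadratic {mse_quad:.2f}")
```

The model with the lower cross-validated error generalizes better, which is usually a more trustworthy signal than a tiny in-sample R-squared difference.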
How to compare two nested models when they have a very small R-squared difference?
I have two models, i.e. v1 = a1 + a2*f + a3*f^2 and v2 = k*(a1 + a2*f + a3*f^2).
Answers (1)
Rohit
on 20 Mar 2023
When comparing two nested models with very small differences in R-squared, it is important to consider other metrics and factors to determine which model is better.
Here are some suggestions: