Brain Box
All Discussions
Why is dimension reduction important?
RMSE (Root Mean Squared Error) vs MAE (Mean Absolute Error)
When cross-entropy is typically a better loss function than Mean Squared Error
Approximate probability distributions
MPE (Most Probable Explanation) vs. MAP (Maximum A Posteriori)
Mean Absolute Percentage Error (MAPE)
Why don’t we see more cross-validation in deep learning?
Explain different methods for cross-validation.
How do you know that your model is low variance, high bias?
How do you know that your model is high variance, low bias?
How’s bias-variance tradeoff related to overfitting and underfitting?
What’s the bias-variance trade-off?
Construct adjacency matrix
Collaborative filtering systems for recommendation
Is feature scaling necessary for kernel methods?
How is Naive Bayes classifier naive?
What is gradient boosting?
What’s linear separation? Why is it desirable when we use SVM?
Bagging and boosting are two popular ensembling methods
k-means and GMM are both powerful clustering algorithms