All models are wrong, but some are useful ~ George Box
the idea > the math > the code
concept -> the math behind it -> python implementation (make a diagram)
High-level snapshot, in rough order of importance
Supervised Learning:
- Linear Regression (Simple, Supervised)
- Decision Trees (Moderate, Supervised)
- Random Forest (Moderate, Supervised)
- Support Vector Machines (Complex, Supervised)
- Neural Networks (Complex, Supervised)
Unsupervised Learning:
- K-Means Clustering (Moderate, Unsupervised)
- Principal Component Analysis (PCA) (Moderate, Unsupervised)
- Gaussian Mixture Model (GMM) (Moderate, Unsupervised)
- Hierarchical Clustering (Moderate, Unsupervised)
Reinforcement Learning:
- Q-Learning (Moderate, Reinforcement)
- Deep Q-Network (DQN) (Complex, Reinforcement)
- Proximal Policy Optimization (PPO) (Complex, Reinforcement)
Each algorithm's position in the lists above roughly indicates its complexity and the type of learning it belongs to. Of course, this is an oversimplified picture; there are many more algorithms and dimensions to consider in real-world machine learning.
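Most of the supervised and unsupervised algorithms listed here live in scikit-learn. Below is a minimal sketch (assuming scikit-learn is installed) of what instantiating them looks like; the reinforcement learning ones (Q-Learning, DQN, PPO) are not in scikit-learn and typically come from RL libraries or hand-rolled implementations.

```python
# Minimal sketch: the supervised/unsupervised algorithms above as scikit-learn estimators.
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

supervised = {
    "Linear Regression": LinearRegression(),
    "Decision Tree": DecisionTreeClassifier(max_depth=3),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "SVM": SVC(kernel="rbf"),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(64, 32)),
}

unsupervised = {
    "K-Means": KMeans(n_clusters=3),
    "PCA": PCA(n_components=2),
    "Gaussian Mixture": GaussianMixture(n_components=3),
    "Hierarchical Clustering": AgglomerativeClustering(n_clusters=3),
}
```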

You have to keep it simple, stupid ~ Dr. Tatiana Bokavera
Fanciness does not matter. The bottom line is how well the model performs on real data.
We could try to fit a Convolutional Megatron Quantum Gradient Neural Network, but if simple linear regression fits the data better, it is THE BETTER MODEL.
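A minimal sketch of that sanity check, assuming synthetic data with a mostly linear relationship: cross-validate both a plain linear regression and a (stand-in) fancy model, and keep whichever generalizes better.

```python
# Minimal sketch: when the data is (mostly) linear, plain linear regression
# often beats a far fancier model. Compare them with cross-validation.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

simple = LinearRegression()
fancy = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)

simple_r2 = cross_val_score(simple, X, y, cv=5, scoring="r2").mean()
fancy_r2 = cross_val_score(fancy, X, y, cv=5, scoring="r2").mean()

print(f"Linear regression R^2: {simple_r2:.3f}")
print(f"Neural network    R^2: {fancy_r2:.3f}")
# Whichever generalizes better on held-out folds is the better model here.
```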
For each algorithm:
- Inputs/Outputs
- Assumptions
- Advantages/Disadvantages (when to use)
- Evaluating performance
- Imbalanced classes
- Regularization
- Optimization
- The Math
- Deploying
What is the point in using a traditional model when neural networks exist?
This is a very important question. Here are some valid reasons not to use a neural network (roughly ranked in order of importance).
Specificity of Task
- For certain tasks, traditional models can actually perform better than neural networks.
- For example, in simple regression or classification tasks with clear linear relationships, a linear model might be more appropriate.
Simplicity and Efficiency
- Traditional models are often simpler and more efficient to train compared to neural networks. This can be particularly beneficial when dealing with smaller datasets or when computational resources are limited.
- Models like linear regression, decision trees, or logistic regression can be trained quickly and require less computational power.
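A rough sketch of the training-cost gap, using logistic regression vs a multi-layer perceptron as stand-ins (absolute timings depend on the machine):

```python
# Minimal sketch: compare wall-clock training time of a simple model vs a neural network.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for name, model in [
    ("Logistic regression", LogisticRegression(max_iter=1000)),
    ("Neural network", MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=300, random_state=0)),
]:
    start = time.perf_counter()
    model.fit(X, y)
    print(f"{name}: trained in {time.perf_counter() - start:.2f}s")
```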
Overfitting Concerns
- Neural networks, especially deep learning models, have a high capacity for learning complex patterns, which can lead to overfitting, especially with smaller datasets.
- Traditional models, with their simpler structure, can sometimes generalize better when data is limited or noisy.
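A minimal sketch of that failure mode on a deliberately tiny synthetic dataset; the train/test gap is the tell (exact numbers depend on the seed):

```python
# Minimal sketch: with very little data, a high-capacity model can memorize the
# training set (near-perfect train accuracy) while generalizing worse than a simple model.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=120, n_features=30, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

for name, model in [
    ("Logistic regression", LogisticRegression(max_iter=1000)),
    ("Neural network", MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=2000, random_state=0)),
]:
    model.fit(X_train, y_train)
    print(f"{name}: train={model.score(X_train, y_train):.2f}, test={model.score(X_test, y_test):.2f}")
```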
Interpretability and Explainability
- Many traditional models offer greater interpretability. For instance, the coefficients in a linear regression model can provide direct insights into the relationship between features and the target variable.
- In sectors like healthcare or finance, where understanding the decision-making process is crucial, this transparency can be a significant advantage.
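A minimal sketch of that kind of direct inspection, using the diabetes dataset bundled with scikit-learn; standardizing first makes the coefficients comparable across features:

```python
# Minimal sketch: the fitted coefficients of a linear model are directly readable.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

data = load_diabetes()
X = StandardScaler().fit_transform(data.data)  # same scale -> comparable coefficients
model = LinearRegression().fit(X, data.target)

for feature, coef in sorted(zip(data.feature_names, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{feature:>6}: {coef:+.1f}")
```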
Data Availability
- Neural networks typically require large amounts of data to perform well. Traditional models can be more suitable for scenarios where data is scarce or expensive to acquire.
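One way to check this empirically is a learning curve: train on growing fractions of the data and see where the validation score plateaus. A minimal sketch with scikit-learn's learning_curve (synthetic data, so the exact numbers are only illustrative):

```python
# Minimal sketch: a learning curve shows how much data a model actually needs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.05, 1.0, 8), cv=5,
)
for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:>5} training samples -> CV accuracy {score:.3f}")
```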
Feature Importance
- It’s often easier to analyze and understand the importance of different features in making predictions with traditional models.
- This can be valuable for feature engineering and understanding the underlying data structure.
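A minimal sketch using a random forest's built-in importances (for linear models, the standardized coefficients shown earlier play the same role):

```python
# Minimal sketch: tree ensembles expose per-feature importance scores after fitting.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

ranked = sorted(zip(data.feature_names, forest.feature_importances_), key=lambda t: -t[1])
for feature, importance in ranked[:5]:  # top five features
    print(f"{feature}: {importance:.3f}")
```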
Speed of Prediction
- Once trained, traditional models can be faster at making predictions, as they generally involve simpler computations compared to a deep neural network.
- This can be crucial in real-time applications.
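A rough sketch of the inference-latency gap, again with logistic regression vs a multi-layer perceptron as stand-ins (absolute numbers depend on the machine):

```python
# Minimal sketch: compare prediction latency of a linear model vs a neural network.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=50, random_state=0)
linear = LogisticRegression(max_iter=1000).fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(512, 512, 512), max_iter=50, random_state=0).fit(X, y)

for name, model in [("Logistic regression", linear), ("Neural network", net)]:
    start = time.perf_counter()
    for _ in range(100):  # 100 batches of 5000 predictions each
        model.predict(X)
    print(f"{name}: {(time.perf_counter() - start) * 10:.1f} ms per batch")
```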
Robustness to Changes in Data Distribution
- Some traditional models are more robust to changes in data distribution or non-stationarity in data.
- For instance, tree-based models are insensitive to feature scaling and monotonic transformations, so they can tolerate some changes in the data without retraining, whereas neural networks often need retraining on new data to maintain performance.
Less Tuning Required
- Neural networks often require extensive hyperparameter tuning to achieve optimal performance.
- Traditional models typically have fewer hyperparameters, making them easier to tune.
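A minimal sketch of the difference in tuning surface: count the exposed hyperparameters, then grid-search the simple model over essentially one knob (the parameter values here are illustrative, not recommendations):

```python
# Minimal sketch: a simple model has a small tuning surface; a grid search over it is cheap.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

print("LogisticRegression hyperparameters:", len(LogisticRegression().get_params()))
print("MLPClassifier hyperparameters:     ", len(MLPClassifier().get_params()))

# Tuning the simple model: essentially one knob (regularization strength C).
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print("Best C:", search.best_params_["C"], "CV accuracy:", round(search.best_score_, 3))
```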