Decision Trees Theory

Decision Trees are hierarchical models used for classification and regression tasks. They work by recursively splitting the data based on feature values, forming a tree-like structure of decision nodes and leaf nodes. In this article, we will explore the key theoretical concepts behind how Decision Trees work, including:

  • Splitting criteria such as Gini Impurity and Information Gain.
  • How Decision Trees handle both classification and regression.
  • The concept of overfitting and pruning to optimize model performance.
  • The trade-offs between depth and generalization.

1. The Structure of a Decision Tree

1.1. Root, Decision, and Leaf Nodes

A Decision Tree consists of three types of nodes:

  • Root Node: The top node that represents the entire dataset. It splits the data based on the best feature, which provides the most information gain or reduces impurity the most.
  • Decision Nodes: Intermediate nodes that split the dataset into subsets based on specific conditions related to the input features.
  • Leaf Nodes: These are the terminal nodes that represent the final prediction, whether it be a class label (for classification) or a value (for regression).

1.2. How Splitting Works

At each decision node, the tree chooses a feature and a threshold to split the data into subsets. The goal is to maximize the purity of the resulting subsets, which means making the subsets as homogeneous as possible.
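As a minimal sketch (with a single numeric feature and made-up values, purely for illustration), a split is just a threshold test that partitions the samples into two subsets:

```python
import numpy as np

# Hypothetical feature and labels for illustration.
ages = np.array([22, 25, 31, 35, 40, 45, 52, 60])
bought = np.array([0, 0, 0, 1, 1, 1, 1, 1])

# A candidate split: samples with feature <= threshold go left, the rest go right.
threshold = 30
left_mask = ages <= threshold
print("left labels: ", bought[left_mask])    # [0 0 0]        -> pure subset
print("right labels:", bought[~left_mask])   # [1 1 1 1 1]    -> pure subset
```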


2. Splitting Criteria

Decision Trees use specific metrics to evaluate how good a particular split is. Two of the most commonly used metrics are Gini Impurity and Information Gain (based on Entropy).

2.1. Gini Impurity

Gini Impurity is a measure of how "mixed" the classes are within a node. The goal is to reduce impurity as much as possible with each split. A node is "pure" when it contains only instances of one class, and impure when it contains a mixture of classes.

  • The formula for Gini Impurity is:
\text{Gini} = 1 - \sum_{i=1}^{k} p_i^2

Where:

  • p_i is the proportion of samples belonging to class i at a particular node.
  • k is the number of classes.
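The formula translates directly into code. The following is a plain NumPy sketch for illustration, not any particular library's internal implementation:

```python
import numpy as np

def gini_impurity(labels):
    """Gini = 1 - sum_i p_i^2, where p_i is the class proportion in `labels`."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini_impurity([0, 0, 0, 1, 1]))  # mixed node -> 0.48
print(gini_impurity([1, 1, 1, 1]))     # pure node  -> 0.0
```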

2.2. Information Gain and Entropy

Information Gain measures the reduction in uncertainty (or entropy) after splitting a node. The algorithm selects the split that maximizes information gain, which leads to purer child nodes.

  • The formula for Entropy is:
\text{Entropy} = - \sum_{i=1}^{k} p_i \log_2(p_i)

Where:

  • p_i is the proportion of samples in class i.
  • k is the number of classes.

Information Gain is the difference in entropy before and after the split:

\text{Information Gain} = \text{Entropy before} - \text{Entropy after}
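The same idea in code (again a NumPy sketch for illustration): entropy of a node, and information gain as the entropy drop from the parent to the sample-weighted children.

```python
import numpy as np

def entropy(labels):
    """Entropy = -sum_i p_i * log2(p_i) over the class proportions in `labels`."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))   # p > 0 by construction, so log2 is safe

def information_gain(parent, left, right):
    """Entropy before the split minus the sample-weighted entropy after it."""
    n = len(parent)
    weighted_child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted_child
```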

2.3. Example Calculation

Suppose a node contains 10 samples: 7 are labeled class 0, and 3 are labeled class 1.

  • The Gini Impurity would be:
\text{Gini} = 1 - (0.7^2 + 0.3^2) = 0.42
  • The Entropy would be:
\text{Entropy} = - (0.7 \log_2(0.7) + 0.3 \log_2(0.3)) \approx 0.88

When the tree splits this node into child nodes, it calculates the new impurity or entropy for the child nodes and chooses the split that maximizes information gain or minimizes impurity.
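Reusing the gini_impurity, entropy, and information_gain helpers sketched above, we can reproduce both numbers and score a hypothetical split of this node:

```python
node = [0] * 7 + [1] * 3                      # 7 samples of class 0, 3 of class 1

print(round(gini_impurity(node), 2))          # 0.42
print(round(entropy(node), 2))                # 0.88

# A hypothetical split of this node into two children:
left  = [0] * 6 + [1] * 1
right = [0] * 1 + [1] * 2
print(round(information_gain(node, left, right), 2))  # ~0.19: a positive gain, so the split helps
```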


3. Decision Trees for Classification

In classification tasks, Decision Trees recursively split the data based on features until the leaf nodes are pure (or nearly pure). The goal is to find splits that make each child node as homogeneous as possible.

3.1. Classification Example

Imagine a dataset with features such as age, income, and marital status to predict whether someone will buy a product:

  1. The root node might split the data based on age (e.g., age > 30).
  2. The next split might be based on income (e.g., income > $50,000).
  3. The tree continues splitting until each leaf node contains people who are either likely to buy the product or not.

Each decision path in the tree can be interpreted as a set of rules, making Decision Trees highly interpretable.
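A minimal scikit-learn sketch of this idea (the feature values and column names below are made up for illustration), including the rule-style printout that makes the tree readable:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: [age, income]; 1 = bought the product.
X = [[22, 30_000], [45, 80_000], [35, 52_000], [28, 40_000], [50, 95_000], [33, 48_000]]
y = [0, 1, 1, 0, 1, 0]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["age", "income"]))  # the tree as if/else rules
print(clf.predict([[40, 60_000]]))                        # prediction for a new person
```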

3.2. Stopping Criteria for Classification

To prevent the tree from growing indefinitely, several stopping criteria are used:

  • Maximum Depth: The tree stops growing after reaching a specified depth.
  • Minimum Samples per Leaf: The tree stops splitting when the number of samples in a node is below a threshold.
  • Impurity Threshold: The tree stops splitting when the decrease in impurity is below a certain value.
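In scikit-learn, these stopping criteria map directly onto constructor arguments; the values below are arbitrary examples, not recommendations:

```python
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(
    max_depth=5,                 # Maximum Depth
    min_samples_leaf=10,         # Minimum Samples per Leaf
    min_impurity_decrease=0.01,  # Impurity Threshold: require at least this much impurity reduction
    random_state=0,
)
```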

4. Decision Trees for Regression

In regression tasks, Decision Trees predict a continuous value rather than a class label. The goal is to minimize the variance of the target variable in each node, and splits are chosen to reduce this variance.

4.1. Regression Example

For example, predicting house prices based on features such as size, number of bedrooms, and location:

  1. The tree might split the data based on house size (e.g., size > 2000 sq ft).
  2. It could then split based on location (e.g., located in a certain neighborhood).
  3. The tree continues splitting until the variance of house prices in each leaf node is minimized.
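A corresponding scikit-learn sketch for regression (with made-up house data), where each leaf predicts the mean price of the training samples that fall into it:

```python
from sklearn.tree import DecisionTreeRegressor

# Hypothetical data: [size_sqft, bedrooms] -> price.
X = [[1200, 2], [1500, 3], [2100, 3], [2600, 4], [3000, 4], [3400, 5]]
y = [150_000, 180_000, 260_000, 320_000, 360_000, 420_000]

reg = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(reg.predict([[2300, 3]]))   # the mean target of the matching leaf
```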

4.2. Loss Function for Regression

In regression tasks, Decision Trees minimize the Mean Squared Error (MSE) in each node:

\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y})^2

Where:

  • y_i is the actual value.
  • \hat{y} is the predicted value (the average of the target values in the node).
  • n is the number of samples in the node.

The tree selects splits that minimize the MSE and continues splitting until a stopping criterion is reached.
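The split-selection logic can be sketched by hand: predict the node mean, measure MSE, and prefer the split whose sample-weighted child MSE is lowest (plain NumPy, for illustration only):

```python
import numpy as np

def node_mse(y):
    """MSE of a node when predicting the mean of its targets."""
    y = np.asarray(y, dtype=float)
    return np.mean((y - y.mean()) ** 2)

def split_mse(y_left, y_right):
    """Sample-weighted MSE of the two children produced by a split."""
    n = len(y_left) + len(y_right)
    return (len(y_left) / n) * node_mse(y_left) + (len(y_right) / n) * node_mse(y_right)

prices = [150_000, 180_000, 260_000, 320_000]
print(node_mse(prices))                    # MSE before splitting
print(split_mse(prices[:2], prices[2:]))   # lower value -> better split
```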


5. Overfitting and Pruning

One of the biggest challenges with Decision Trees is overfitting. A tree that grows too deep captures noise in the training data, resulting in poor generalization to unseen data.

5.1. Overfitting

A tree is said to overfit when it models not only the true patterns in the data but also the random noise. Overfitted trees often have very high accuracy on the training set but perform poorly on the test set.

5.2. Pruning

To combat overfitting, we use pruning to remove parts of the tree that capture noise or irrelevant details. There are two types of pruning:

  1. Pre-pruning (early stopping):
    • Stop the tree from growing once a certain depth or minimum number of samples per leaf is reached.
  2. Post-pruning (cost complexity pruning):
    • Grow the tree fully, then remove branches that have little importance by adding a penalty for model complexity.

In cost complexity pruning, a penalty term is added to the loss function to control the depth of the tree:

\text{Cost complexity} = \text{Impurity} + \alpha \times \text{Tree size}

Where:

  • \alpha is a regularization parameter that controls how aggressively the tree is pruned.
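scikit-learn exposes cost complexity pruning directly: `cost_complexity_pruning_path` returns the effective alpha values for a fully grown tree, and `ccp_alpha` applies a chosen penalty. The sketch below uses the built-in iris dataset and an arbitrary alpha; in practice alpha would be selected by cross-validation:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Effective alphas at which subtrees would be pruned away.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
print(path.ccp_alphas)

# Refit with a chosen penalty; larger ccp_alpha prunes more aggressively.
pruned = DecisionTreeClassifier(ccp_alpha=0.02, random_state=0).fit(X, y)
print(pruned.get_depth(), pruned.get_n_leaves())
```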

6. Depth vs. Generalization

There is a trade-off between tree depth and generalization:

  • Shallow Trees: A shallow tree (with fewer splits) is less likely to overfit but may underfit, failing to capture important patterns in the data.
  • Deep Trees: A deep tree captures more detail and may fit the training data perfectly, but it risks overfitting and poor generalization to new data.

By adjusting the depth, the goal is to strike a balance between fitting the training data well and ensuring the model generalizes to unseen data.
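A quick way to see this trade-off (sketched here on the built-in iris data; any dataset would do) is to compare train and test accuracy as max_depth grows:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (1, 2, 4, 8, None):   # None = grow until leaves are pure
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(depth, round(clf.score(X_tr, y_tr), 3), round(clf.score(X_te, y_te), 3))
```

Train accuracy keeps rising with depth, while test accuracy typically levels off or drops once the tree starts fitting noise.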


Summary

In this article, we explored the theoretical foundations of Decision Trees, focusing on:

  • The key structure of Decision Trees: root nodes, decision nodes, and leaf nodes.
  • How splitting criteria like Gini Impurity and Information Gain guide the tree-building process.
  • How Decision Trees handle both classification and regression tasks.
  • Techniques like pruning and stopping criteria to avoid overfitting.

Understanding these concepts is essential for building efficient and interpretable Decision Tree models. In the next sections, we will explore practical examples of implementing Decision Trees using popular machine learning libraries such as scikit-learn, TensorFlow, and PyTorch.