LightGBM Practical Example with PyTorch
In this example, we will integrate LightGBM with PyTorch to predict house prices using the California Housing Dataset. While LightGBM and PyTorch are used for different purposes, combining them allows you to benefit from the fast and efficient training of LightGBM and the deep learning capabilities of PyTorch.
We will follow these steps:
- Train a LightGBM model.
- Convert LightGBM predictions to PyTorch tensors.
- Integrate LightGBM predictions into a PyTorch neural network.
1. Install Dependencies
Ensure that both LightGBM and PyTorch are installed in your environment. You can install them via pip:
pip install lightgbm torch
2. Load and Preprocess the Dataset
We will use the California Housing Dataset from sklearn.datasets, which contains features such as median income, population, and other factors affecting housing prices.
import numpy as np
import lightgbm as lgb
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Load the dataset
data = fetch_california_housing()
X, y = data.data, data.target
# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Normalize the features using StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
3. Train the LightGBM Model
We will train a LightGBM model using LGBMRegressor to predict house prices. After training, we'll convert the predictions to PyTorch tensors.
# Initialize the LightGBM regressor
model = lgb.LGBMRegressor(
    n_estimators=1000,   # Maximum number of boosting rounds
    learning_rate=0.05,  # Learning rate
    max_depth=7,         # Maximum depth of each tree
    random_state=42
)
# Train the LightGBM model; in LightGBM >= 4.0, early stopping and logging
# are configured via callbacks instead of fit() keyword arguments
model.fit(
    X_train_scaled, y_train,
    eval_set=[(X_test_scaled, y_test)],
    callbacks=[lgb.early_stopping(stopping_rounds=10, verbose=False), lgb.log_evaluation(period=0)]
)
# Make predictions on the test set
y_pred_lgbm = model.predict(X_test_scaled)
print(f"LightGBM model predictions (first 5): {y_pred_lgbm[:5]}")
In this step:
- We trained the LightGBM model on the normalized dataset and generated predictions on the test set.
- The early-stopping callback halts training if the validation score does not improve for 10 consecutive rounds (a quick check of this follows below).
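A minimal sketch of that check, assuming the fit call above ran with the early-stopping callback:
# best_iteration_ is populated by the sklearn API when early stopping triggers
print(f"Best iteration: {model.best_iteration_}")
# RMSE on the test set, computed from the same predictions as above
from sklearn.metrics import mean_squared_error
rmse = mean_squared_error(y_test, y_pred_lgbm) ** 0.5
print(f"LightGBM test RMSE: {rmse:.3f}")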
4. Convert LightGBM Predictions to PyTorch Tensors
Next, we convert the LightGBM predictions into PyTorch tensors so they can be fed into a PyTorch deep learning model.
import torch
# Convert LightGBM predictions and test labels to PyTorch tensors
y_pred_tensor = torch.tensor(y_pred_lgbm, dtype=torch.float32)
y_test_tensor = torch.tensor(y_test, dtype=torch.float32)
# Print the PyTorch tensor
print(f"PyTorch Tensor (first 5): {y_pred_tensor[:5]}")
Explanation:
- We use torch.tensor to convert the NumPy arrays from LightGBM’s predictions into PyTorch tensors.
- This step allows us to integrate the predictions into a PyTorch workflow (an alternative, copy-free conversion is sketched below).
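The copy-free alternative mentioned above is torch.from_numpy, which returns a tensor that shares memory with the source array. The explicit cast matters because LightGBM outputs float64 predictions, while PyTorch layers default to float32:
# torch.from_numpy shares memory with the NumPy array (no copy);
# .float() then casts the float64 predictions to float32
y_pred_tensor_alt = torch.from_numpy(y_pred_lgbm).float()
print(y_pred_tensor_alt.dtype)  # torch.float32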
5. Build a Neural Network in PyTorch
We will now create a simple neural network in PyTorch. This model will take the LightGBM predictions as input and fine-tune them through additional layers.
import torch.nn as nn
import torch.optim as optim
# Define a simple feed-forward neural network
class SimpleNN(nn.Module):
    def __init__(self):
        super(SimpleNN, self).__init__()
        self.fc1 = nn.Linear(1, 64)   # Input (one LightGBM prediction) to first hidden layer
        self.fc2 = nn.Linear(64, 32)  # First hidden layer to second hidden layer
        self.fc3 = nn.Linear(32, 1)   # Second hidden layer to output

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x
# Initialize the neural network model
model_nn = SimpleNN()
# Define loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.Adam(model_nn.parameters(), lr=0.001)
Explanation:
- We defined a simple feed-forward neural network with two hidden layers.
- ReLU is used as the activation function for the hidden layers.
- The model is optimized using Adam, and the loss function is Mean Squared Error (MSE), which is suitable for regression (a quick sanity check of the architecture follows below).
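For the sanity check referenced above, printing the module shows the layer structure, and a one-liner counts the trainable parameters:
# Print the layer structure and count trainable parameters
print(model_nn)
n_params = sum(p.numel() for p in model_nn.parameters() if p.requires_grad)
print(f"Trainable parameters: {n_params}")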
6. Train the PyTorch Model
We will now fine-tune the LightGBM predictions by training the neural network on the output of the LightGBM model. Note that, to keep the example compact, the network is both trained and evaluated on the test-set predictions; in a real project you would train it on predictions for held-out training data to avoid leaking test information.
# Reshape LightGBM predictions to match the input shape expected by the neural network
y_pred_tensor = y_pred_tensor.view(-1, 1) # Reshaping to (n_samples, 1)
# Train the neural network model
num_epochs = 100
for epoch in range(num_epochs):
    model_nn.train()

    # Forward pass
    outputs = model_nn(y_pred_tensor)
    loss = criterion(outputs, y_test_tensor.view(-1, 1))  # Reshape y_test_tensor to (n_samples, 1)

    # Backward pass and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Print loss every 10 epochs
    if (epoch + 1) % 10 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.4f}')
Explanation:
- We reshape the LightGBM predictions to the correct shape for the neural network.
- The network is trained for 100 epochs using MSELoss, and the loss is printed every 10 epochs (a mini-batch variant is sketched after this list).
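The loop above updates on the full batch at once, which is fine at this scale. For larger datasets, mini-batch training via TensorDataset and DataLoader is the more common pattern; a sketch follows, where the batch size of 64 and the 10 epochs are arbitrary illustrative choices:
from torch.utils.data import TensorDataset, DataLoader

# Pair each LightGBM prediction with its target and draw shuffled mini-batches
dataset = TensorDataset(y_pred_tensor, y_test_tensor.view(-1, 1))
loader = DataLoader(dataset, batch_size=64, shuffle=True)

for epoch in range(10):  # shortened run, purely for illustration
    model_nn.train()
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model_nn(xb), yb)
        loss.backward()
        optimizer.step()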
7. Evaluate the Model
After training the neural network, we can evaluate the performance of the combined model using the Mean Absolute Error (MAE) and R-Squared (R²) metrics.
from sklearn.metrics import mean_absolute_error, r2_score
# Make predictions using the neural network
model_nn.eval() # Set the model to evaluation mode
with torch.no_grad():
    nn_predictions = model_nn(y_pred_tensor).squeeze()
# Convert back to NumPy for evaluation
nn_predictions_np = nn_predictions.numpy()
# Evaluate the combined model
mae = mean_absolute_error(y_test, nn_predictions_np)
r2 = r2_score(y_test, nn_predictions_np)
print(f"Combined Model - MAE: {mae:.2f}, R-Squared: {r2:.2f}")
Interpretation:
- MAE: Measures the average magnitude of the errors in predictions. A lower value indicates a better fit.
- R-Squared (R²): Measures how well the model explains the variance in the target variable. A value closer to 1 indicates a better fit (a LightGBM-only baseline for comparison follows below).
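For the baseline referenced above: because the network receives only a single scalar per sample, it can at most recalibrate LightGBM's output, so expect the two scores to be similar:
# Baseline: the same metrics on the raw LightGBM predictions
mae_lgbm = mean_absolute_error(y_test, y_pred_lgbm)
r2_lgbm = r2_score(y_test, y_pred_lgbm)
print(f"LightGBM alone - MAE: {mae_lgbm:.2f}, R-Squared: {r2_lgbm:.2f}")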
Summary
In this example, we demonstrated how to integrate LightGBM with PyTorch to leverage the strengths of both frameworks. We followed these steps:
- Loaded and preprocessed the data using scikit-learn.
- Trained a LightGBM model for predicting house prices.
- Converted LightGBM predictions to PyTorch tensors.
- Built a neural network in PyTorch to fine-tune the predictions from LightGBM.
- Evaluated the combined model using MAE and R-Squared metrics.
LightGBM's efficient tree-based boosting combined with PyTorch's flexibility and deep learning capabilities can be a powerful approach for certain tasks.