How to save & load an xgboost model?

Solution 1

Here is how I solved the problem:

import pickle
file_name = "xgb_reg.pkl"

# save
pickle.dump(xgb_model, open(file_name, "wb"))

# load
xgb_model_loaded = pickle.load(open(file_name, "rb"))

# test
ind = 1
test = X_val[ind:ind+1]  # slice keeps the 2-D shape that predict expects
xgb_model_loaded.predict(test)[0] == xgb_model.predict(test)[0]

Out[1]: True

Solution 2

Both save_model and dump_model save the model; the difference is that with dump_model you can also save the feature names and dump the trees in a human-readable text format.

load_model only works with a model produced by save_model. The output of dump_model can be used, for example, with xgbfi.

When loading the model, you need to specify the path where your model is saved. In the example bst.load_model("model.bin"), the model is loaded from the file model.bin - it is just the name of the file containing the model. Good luck!

EDIT: From the Xgboost documentation (for version 1.3.3), dump_model() should be used for saving the model for further interpretation, while save_model() and load_model() should be used for saving and loading the model. Please check the docs for more details.

There is also a difference between the Learning API and the Scikit-Learn API of Xgboost. The latter saves the best_ntree_limit variable, which is set during training with early stopping. You can read the details in my article How to save and load Xgboost in Python?

The save_model() method recognizes the format from the file name: if *.json is specified, the model is saved as JSON; otherwise it is saved in XGBoost's binary format.

Solution 3

An easy way of saving and loading an xgboost model is with the joblib library.

import joblib

filename = "xgb_model.joblib"

# save model
joblib.dump(xgb, filename)

# load saved model
xgb = joblib.load(filename)

Solution 4

Don't use pickle or joblib, as that may introduce dependencies on the xgboost version. The canonical way to save and restore models is with load_model and save_model.

If you’d like to store or archive your model for long-term storage, use save_model (Python) and xgb.save (R).

This is the relevant documentation for the latest versions of XGBoost. It also explains the difference between dump_model and save_model.

Note that you can serialize/de-serialize your models as JSON by specifying json as the extension when using bst.save_model. If the speed of saving and restoring the model is not important to you, this is very convenient: since the result is a simple text file, it allows you to do proper version control of the model.

Solution 5

If you are using the sklearn api you can use the following:


xgb_model_latest = xgboost.XGBClassifier() # or whichever sklearn estimator you're using

xgb_model_latest.load_model("model.json") # or model.bin if you are using the binary format rather than JSON

If you instead load with the Booster method shown above, you will get an xgboost Booster from the native Python API, not an sklearn estimator from the sklearn API.

So yeah, this seems to be the most pythonic way to load in a saved xgboost model data if you are using the sklearn api.

Author: Pengju Zhao

Updated on July 05, 2022

Comments

  • Pengju Zhao
    Pengju Zhao almost 2 years

    From the XGBoost guide:

    After training, the model can be saved.

    bst.save_model('0001.model')
    

    The model and its feature map can also be dumped to a text file.

    # dump model
    bst.dump_model('dump.raw.txt')
    # dump model with feature map
    bst.dump_model('dump.raw.txt', 'featmap.txt')
    

    A saved model can be loaded as follows:

    bst = xgb.Booster({'nthread': 4})  # init model
    bst.load_model('model.bin')  # load data
    

    My questions are following.

    1. What's the difference between save_model & dump_model?
    2. What's the difference between saving '0001.model' and saving 'dump.raw.txt' with 'featmap.txt'?
    3. Why is the file name used for loading, model.bin, different from the name used for saving, 0001.model?
    4. Suppose I trained two models, model_A and model_B, and want to save both for future use. Which save & load functions should I use? Could you show the full process?
  • oshribr
    oshribr over 5 years
    It's not good if you want to save and load the model across languages. For example, you may want to train the model in Python but predict in Java.
  • Abhilash Awasthi
    Abhilash Awasthi about 5 years
    This is the approach advised by the XGB developers when you are using the sklearn API of xgboost. XGBClassifier & XGBRegressor should be saved like this, through the pickle format.
  • Yi Lin Liu
    Yi Lin Liu almost 5 years
    It says joblib is deprecated on Python 3.8.
  • Fontaine007
    Fontaine007 about 4 years
    If your model is saved as a pickle, you may lose support when you upgrade the xgboost version.
  • dhanush-ai1990
    dhanush-ai1990 almost 4 years
    There will be incompatibilities when you save and load as pickle across different versions of Xgboost.
  • Galo Castillo
    Galo Castillo over 3 years
    I have used this method, but I am not getting the parameters of the previously saved model when calling xgb_model_latest.get_params().
  • Ravi
    Ravi about 3 years
    I have the same problem.
  • Robert Beatty
    Robert Beatty about 3 years
    Not having that issue. Default values are treated in a way I don't like, but I do get the params I put in. I'm using 1.2.1. Feel free to post your code so we can try to work through this.
  • AmphotericLewisAcid
    AmphotericLewisAcid almost 3 years
    This is a legitimate use-case - for example, pickling is the official recommendation to save a sklearn pipeline. This necessarily means that if one has an sklearn pipeline containing an XGBoost model, they must end up pickling XGBoost. If the concern is that somewhere down the road, an update to XGBoost may break the pickle's behavior, that's why version-pinning (and unit testing) exists.