What are the pros and cons between get_dummies (Pandas) and OneHotEncoder (Scikit-learn)?
Solution 1
OneHotEncoder cannot process string values directly. If your nominal features are strings, then you need to first map them into integers.
pandas.get_dummies is kind of the opposite. By default, it only converts string columns into one-hot representation, unless columns are specified.
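For illustration, a minimal sketch of the two default behaviors (the tiny DataFrame here is made up, and note that recent scikit-learn versions accept strings directly, as Solution 2 explains):
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({"color": ["red", "green"], "size": [1, 2]})

# get_dummies expands only the string column by default; "size" is untouched.
pd.get_dummies(df)
#    size  color_green  color_red
# 0     1            0          1
# 1     2            1          0

# In older scikit-learn, strings first had to be mapped to integers, e.g.:
codes = LabelEncoder().fit_transform(df["color"])  # array([1, 0]): green=0, red=1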
Solution 2
For machine learning, you almost definitely want to use sklearn.OneHotEncoder. For other tasks like simple analyses, you might be able to use pd.get_dummies, which is a bit more convenient.
Note that sklearn.OneHotEncoder has been updated in recent versions so that it does accept strings for categorical variables, as well as integers.
The crux of it is that the sklearn encoder creates a function which persists and can then be applied to new data sets which use the same categorical variables, with consistent results.
from sklearn.preprocessing import OneHotEncoder
# Create the encoder.
encoder = OneHotEncoder(handle_unknown="ignore")
encoder.fit(X_train) # Assume for simplicity all features are categorical.
# Apply the encoder.
X_train = encoder.transform(X_train)
X_test = encoder.transform(X_test)
Note how we apply the same encoder we created via X_train to the new data set X_test.
Consider what happens if X_test contains different levels than X_train for one of its variables. For example, let's say X_train["color"] contains only "red" and "green", but in addition to those, X_test["color"] sometimes contains "blue".
If we use pd.get_dummies, X_test will end up with an additional "color_blue" column which X_train doesn't have, and the inconsistency will probably break our code later on, especially if we are feeding X_test to an sklearn model which we trained on X_train.
And if we want to process the data like this in production, where we're receiving a single example at a time, pd.get_dummies won't be of use.
With sklearn.OneHotEncoder, on the other hand, once we've created the encoder, we can reuse it to produce the same output every time, with columns only for "red" and "green". And we can explicitly control what happens when it encounters the new level "blue": if we think that's impossible, then we can tell it to throw an error with handle_unknown="error"; otherwise we can tell it to continue and simply set the red and green columns to 0, with handle_unknown="ignore".
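A minimal sketch of that scenario (the toy data here is illustrative):
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

X_train = pd.DataFrame({"color": ["red", "green", "red"]})
# Note: sparse=False was renamed to sparse_output=False in scikit-learn 1.2.
encoder = OneHotEncoder(handle_unknown="ignore", sparse=False)
encoder.fit(X_train)

# A single production example with the unseen level "blue":
X_new = pd.DataFrame({"color": ["blue"]})
encoder.transform(X_new)
# array([[0., 0.]])  # both known levels (green, red) are set to 0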
Solution 3
I really like Carl's answer and upvoted it. I will just expand Carl's example a bit so that more people will hopefully appreciate that pd.get_dummies can handle unknowns too. The two examples below show that pd.get_dummies can accomplish the same thing in handling unknowns as OHE.
# data is from @dzieciou's comment above
>>> data = pd.DataFrame(pd.Series(['good', 'bad', 'worst', 'good', 'good', 'bad']))
# new_data has two values that data does not have.
>>> new_data = pd.DataFrame(
        pd.Series(['good', 'bad', 'worst', 'good', 'good', 'bad', 'excellent', 'perfect']))
Using pd.get_dummies
>>> df = pd.get_dummies(data)
>>> col_list = df.columns.tolist()
>>> print(df)
   0_bad  0_good  0_worst
0      0       1        0
1      1       0        0
2      0       0        1
3      0       1        0
4      0       1        0
5      1       0        0
>>> new_df = pd.get_dummies(new_data)
# handle unknowns by using .reindex() and .fillna()
>>> new_df = new_df.reindex(columns=col_list).fillna(0.00)
>>> print(new_df)
#    0_bad  0_good  0_worst
# 0      0       1        0
# 1      1       0        0
# 2      0       0        1
# 3      0       1        0
# 4      0       1        0
# 5      1       0        0
# 6      0       0        0
# 7      0       0        0
Using OneHotEncoder
>>> encoder = OneHotEncoder(handle_unknown="ignore", sparse=False)
>>> encoder.fit(data)
>>> encoder.transform(new_data)
# array([[0., 1., 0.],
# [1., 0., 0.],
# [0., 0., 1.],
# [0., 1., 0.],
# [0., 1., 0.],
# [1., 0., 0.],
# [0., 0., 0.],
# [0., 0., 0.]])
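If you want labeled columns back from the encoder (to mirror the get_dummies output), the fitted encoder exposes its categories; a short sketch, assuming scikit-learn >= 1.0 for get_feature_names_out (older versions had get_feature_names instead):
>>> pd.DataFrame(encoder.transform(new_data),
...              columns=encoder.get_feature_names_out())
# columns come out as 'x0_bad', 'x0_good', 'x0_worst' here, since the
# input DataFrame's column name (0) is not a string.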
Solution 4
Why wouldn't you just cache or save the columns as a variable, col_list, from the resulting get_dummies, and then use pd.reindex to align the train and test datasets? Example:
df = pd.get_dummies(data)
col_list = df.columns.tolist()
new_df = pd.get_dummies(new_data)
new_df = new_df.reindex(columns=col_list).fillna(0.00)
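A hypothetical helper wrapping this pattern (the function name is illustrative, not from the answer):
import pandas as pd

def align_dummies(train_df, new_df):
    # One-hot encode both frames, then force new_df onto train_df's columns:
    # unseen levels in new_df are dropped, missing levels are filled with 0.
    train_dummies = pd.get_dummies(train_df)
    new_dummies = pd.get_dummies(new_df).reindex(columns=train_dummies.columns,
                                                 fill_value=0)
    return train_dummies, new_dummies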
Comments
- O.rka about 4 years:
I'm learning different methods to convert categorical variables to numeric for machine-learning classifiers. I came across the pd.get_dummies method and sklearn.preprocessing.OneHotEncoder() and I wanted to see how they differed in terms of performance and usage.
I found a tutorial on how to use OneHotEncoder() at https://xgdgsc.wordpress.com/2015/03/20/note-on-using-onehotencoder-in-scikit-learn-to-work-on-categorical-features/ since the sklearn documentation wasn't too helpful on this feature. I have a feeling I'm not doing it correctly... but
Can someone explain the pros and cons of using pd.get_dummies over sklearn.preprocessing.OneHotEncoder() and vice versa? I know that OneHotEncoder() gives you a sparse matrix but other than that I'm not sure how it is used and what the benefits are over the pandas method. Am I using it inefficiently?
import pandas as pd
import numpy as np
import seaborn as sns  # needed for sns.set() below; missing from the original snippet
from sklearn.datasets import load_iris
sns.set()
%matplotlib inline

# Iris Plot
iris = load_iris()
n_samples, m_features = iris.data.shape

# Load Data
X, y = iris.data, iris.target
D_target_dummy = dict(zip(np.arange(iris.target_names.shape[0]), iris.target_names))

DF_data = pd.DataFrame(X, columns=iris.feature_names)
DF_data["target"] = pd.Series(y).map(D_target_dummy)
#    sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
# 0                5.1               3.5                1.4               0.2
# 1                4.9               3.0                1.4               0.2
# 2                4.7               3.2                1.3               0.2
# 3                4.6               3.1                1.5               0.2
# 4                5.0               3.6                1.4               0.2
# 5                5.4               3.9                1.7               0.4

DF_dummies = pd.get_dummies(DF_data["target"])
#    setosa  versicolor  virginica
# 0       1           0          0
# 1       1           0          0
# 2       1           0          0
# 3       1           0          0
# 4       1           0          0
# 5       1           0          0

from sklearn.preprocessing import OneHotEncoder, LabelEncoder

def f1(DF_data):
    Enc_ohe, Enc_label = OneHotEncoder(), LabelEncoder()
    DF_data["Dummies"] = Enc_label.fit_transform(DF_data["target"])
    DF_dummies2 = pd.DataFrame(Enc_ohe.fit_transform(DF_data[["Dummies"]]).todense(),
                               columns=Enc_label.classes_)
    return DF_dummies2

%timeit pd.get_dummies(DF_data["target"])
# 1000 loops, best of 3: 777 µs per loop

%timeit f1(DF_data)
# 100 loops, best of 3: 2.91 ms per loop
- Ankit Seth over 6 years: Other than that, is one more efficient than the other?
- Bs He over 5 years: Update: OneHotEncoder still cannot be applied to strings as of version 0.20.0.
- dzieciou almost 5 years: @BsHe No longer true in sklearn 0.20.3: OneHotEncoder(sparse=False).fit_transform(pd.DataFrame(pd.Series(['good','bad','worst','good', 'good', 'bad']))) works, which means OneHotEncoder can be applied to strings.
- Bs He almost 5 years: @dzieciou Thanks for the update.
- barker almost 5 years: I believe this answer has far greater impact than the accepted one. The real magic is handling unknown categorical features, which are bound to pop up in production.
- gosuto over 4 years: How does this answer the question?
- Carl over 4 years: More to refute the previous comment that sklearn's OHE is superior because of handle_unknown. The same can be accomplished using pandas reindex.
- Chiraz BenAbdelkader over 4 years: I think this is a better, more complete answer than the accepted answer.
- Mint over 4 years: There can be a sneaky problem with using get_dummies except as a one-off run. What happens if you have drop_first=True and the next sample doesn't include the dropped value?
- Mint over 4 years: Can you please expand your answer to include an example with drop_first=True, and then also show new data that doesn't include the dropped value? (A sketch of this issue appears after the comments.)
- gented over 4 years: You cannot encode new unseen data with pd.get_dummies.
- dami.max about 4 years: Yes. IMHO, this is a better answer than the accepted answer.
- Binod Mathews over 3 years: Yup. This answer definitely explains better why OneHotEncoder might be better, along with a clear example.
- Dr Nisha Arora over 3 years: Additional note: there are many other encoders in sklearn, and when to use which depends upon the data. stackoverflow.com/a/63822728/5114585 might help you understand some common encoders' uses.
- Ernesto Lopez Fune over 3 years: What if the categorical variables one needs to transform contain missing values?
- code_conundrum over 2 years: @ErnestoLopezFune I think we can handle missing values in categorical features before the OneHotEncoder by imputation; a pipeline is the easiest way:
  # Preprocessing for categorical data
  from sklearn.impute import SimpleImputer
  from sklearn.pipeline import Pipeline
  from sklearn.preprocessing import OneHotEncoder
  categorical_transformer = Pipeline(steps=[
      ('imputer', SimpleImputer(strategy='most_frequent')),
      ('onehot', OneHotEncoder(handle_unknown='ignore'))
  ])
  It will replace missing values with the "most_frequent" value.
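Regarding Mint's drop_first question above, a minimal sketch of the pitfall (the toy data is illustrative):
import pandas as pd

train = pd.DataFrame({"color": ["blue", "green", "red"]})
train_d = pd.get_dummies(train, drop_first=True)  # columns: color_green, color_red
                                                  # ("blue" becomes the all-zeros baseline)

new = pd.DataFrame({"color": ["green"]})
new_d = pd.get_dummies(new, drop_first=True)      # "green" is dropped as the first
                                                  # (and only) level -> no columns at all!
new_d = new_d.reindex(columns=train_d.columns, fill_value=0)
print(new_d)  # all zeros: "green" is now silently encoded as the baseline "blue"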