Logistic regression on One-hot encoding
Solution 1
Consider the following approach:
first, let's encode all non-numeric columns with LabelEncoder (note: this is label encoding, not true one-hot encoding; see the comments below):
In [220]: from sklearn.preprocessing import LabelEncoder
In [221]: x = df.select_dtypes(exclude=['number']) \
.apply(LabelEncoder().fit_transform) \
.join(df.select_dtypes(include=['number']))
In [228]: x
Out[228]:
status country city datetime amount
601766 0 0 1 1.453916e+09 4.5
669244 0 1 0 1.454109e+09 6.9
now we can fit a LinearRegression model (despite the variable name below, LinearRegression is a regressor, not a classifier; for a categorical status target, LogisticRegression would be the usual choice):
In [229]: from sklearn.linear_model import LinearRegression
In [230]: classifier = LinearRegression()
In [231]: classifier.fit(x.drop(columns='status'), x['status'])
Out[231]: LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
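A minimal, self-contained sketch of Solution 1 (the toy frame below mirrors the two example rows above; the LinearRegression import is assumed from sklearn.linear_model):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression

# Toy frame mirroring the example above
df = pd.DataFrame({
    'status': ['received', 'received'],
    'country': ['France', 'Italy'],
    'city': ['Paris', 'Naples'],
    'datetime': [1.453916e+09, 1.454109e+09],
    'amount': [4.5, 6.9],
})

# Label-encode the non-numeric columns, then re-join the numeric ones
x = (df.select_dtypes(exclude=['number'])
       .apply(LabelEncoder().fit_transform)
       .join(df.select_dtypes(include=['number'])))

# Fit on everything except the target column
classifier = LinearRegression()
classifier.fit(x.drop(columns='status'), x['status'])
```

Note that `.apply(LabelEncoder().fit_transform)` fits a fresh encoding per column, so the integer codes are only meaningful within each column.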
Solution 2
To do a one-hot encoding in a scikit-learn project, you may find it cleaner to use the scikit-learn-contrib project category_encoders: https://github.com/scikit-learn-contrib/categorical-encoding, which includes many common categorical variable encoding methods including one-hot.
Comments
-
Mornor almost 2 years
I have a DataFrame (data) for which the head looks like the following:

            status      datetime country  amount    city
    601766  received  1.453916e+09  France     4.5   Paris
    669244  received  1.454109e+09   Italy     6.9  Naples

I would like to predict the status given datetime, country, amount and city.
Since status, country and city are strings, I one-hot-encoded them:

    one_hot = pd.get_dummies(data['country'])
    data = data.drop(item, axis=1)  # Drop the column as it is now one-hot encoded
    data = data.join(one_hot)

I then create a simple LinearRegression model and fit my data:

    y_data = data['status']
    classifier = LinearRegression(n_jobs=-1)
    X_train, X_test, y_train, y_test = train_test_split(data, y_data, test_size=0.2)
    columns = X_train.columns.tolist()
    classifier.fit(X_train[columns], y_train)

But I got the following error:

    could not convert string to float: 'received'

I have the feeling I am missing something here and I would like some input on how to proceed. Thank you for reading this far!
-
m-dz about 3 years
Try y_data = data['status'] == 'received', I am pretty sure LinearRegression is expecting a numeric/boolean variable here.
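The suggested conversion, sketched on hypothetical data (the comparison yields a boolean Series, which LinearRegression accepts as a numeric target):

```python
import pandas as pd

# Hypothetical status column with two possible values
data = pd.DataFrame({'status': ['received', 'pending', 'received']})

# Boolean target: True where status == 'received', False otherwise
y_data = data['status'] == 'received'
print(y_data.tolist())  # [True, False, True]
```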
-
Mornor almost 7 years
Thanks a lot! Would you mind explaining why my initial solution did not work out?
-
MaxU - stop genocide of UA almost 7 years
@Mornor, you are welcome. I guess X_train[columns] and/or y_data has some string columns, hence: could not convert string to float: 'received'
-
Mornor almost 7 years
I would like to add that your answer is only partially correct. It only label-encodes the strings and does not one-hot encode them. That will produce skewed results, since some strings will be worth "more" than others.
-
Juan Acevedo over 5 years
If anyone is wondering what Mornor means: label encoding produces numerical values, e.g. France = 0, Italy = 1, etc. That implies some categories are worth more than others. With one-hot encoding each category gets equal weight, e.g. France = [1, 0], Italy = [0, 1]. Also don't forget about the dummy variable trap: algosome.com/articles/dummy-variable-trap-regression.html.
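The difference can be seen with a quick pandas sketch (pd.get_dummies does the one-hot split; its drop_first=True option is one way to avoid the dummy variable trap mentioned above):

```python
import pandas as pd

s = pd.Series(['France', 'Italy', 'France'])

# Label encoding imposes an ordering: France=0, Italy=1 (alphabetical)
codes = s.astype('category').cat.codes.tolist()
print(codes)  # [0, 1, 0]

# One-hot encoding gives each category its own 0/1 column
dummies = pd.get_dummies(s)
print(dummies)

# Dropping one column removes the perfect collinearity (dummy variable trap)
reduced = pd.get_dummies(s, drop_first=True)
print(reduced)
```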