Why is (rf)clf feature_importance giving importance to a feature where all values are the same?
I am comparing multi-class classification with Random Forests and CART in scikit-learn.
Two of my features (feature 4 and feature 6) should be irrelevant for the classification because all of their values are the same.
But the feature_importances_ output of the RandomForestClassifier is the following:
Feature ranking:
- feature 3 (0.437165)
- feature 2 (0.216415)
- feature 6 (0.102238)
- feature 5 (0.084897)
- feature 1 (0.064624)
- feature 4 (0.059332)
- feature 0 (0.035328)
CART feature_importance output:
Feature ranking:
- feature 3 (0.954666)
- feature 6 (0.014117)
- feature 0 (0.011529)
- feature 1 (0.010586)
- feature 2 (0.006785)
- feature 4 (0.002204)
- feature 5 (0.000112)
Feature 4 has the same value in every row, and so does feature 6.
Here is the code:
Random Forest
import numpy as np

importances = rfclf.feature_importances_
# Spread of the importances across the individual trees (computed but not printed below)
std = np.std([tree.feature_importances_ for tree in rfclf.estimators_],
             axis=0)
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking:")
for f in range(x.shape[1]):
    print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
CART
importances = clf.feature_importances_
# clf is a single decision tree, so there is no per-tree spread to compute here
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking:")
for f in range(x.shape[1]):
    print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
I would expect the importances to be:
- feature 6 (0.000000)
- feature 4 (0.000000)
When I simply drop those two features, my models overfit.
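A quick sanity check, assuming x is the NumPy feature matrix used in the snippets above: a column that never varies cannot be chosen for a split, so counting the distinct values per column helps confirm whether features 4 and 6 are really constant in the data the model was trained on.

import numpy as np

# Number of distinct values per column; a truly constant feature has exactly one
n_unique = [len(np.unique(x[:, j])) for j in range(x.shape[1])]
print(n_unique)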
python machine-learning scikit-learn random-forest feature-selection
asked Mar 27 at 6:00, edited Mar 27 at 15:00 – thestruggleisreal
I think it means RF didn't find a more significant feature to "split", which probably means features with lesser importance are just noise. Did you set a depth? You should try depth=None to see if it keeps using features 4 and 6. Another thing to try would be to just keep features 2 & 3 and see if the score changes.
– CoMartel
Mar 27 at 7:48
When I just don't use those two features, my models overfit. @CoMartel But I will try your recommendations out.
– thestruggleisreal
Mar 27 at 15:01
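For reference, a minimal sketch of the two experiments suggested in the comment above (x and y are assumed to be the NumPy feature matrix and label vector from the question; the comment's depth=None corresponds to max_depth=None in scikit-learn):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Experiment 1: no depth cap -- do features 4 and 6 still receive weight?
rf_deep = RandomForestClassifier(max_depth=None, random_state=0).fit(x, y)
print(rf_deep.feature_importances_)

# Experiment 2: keep only the two strongest features and compare cross-validated scores
print(cross_val_score(RandomForestClassifier(random_state=0), x, y, cv=5).mean())
print(cross_val_score(RandomForestClassifier(random_state=0), x[:, [2, 3]], y, cv=5).mean())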
1 Answer
You need to set a limit on the depth of your trees. I recommend doing a grid search over min_samples_leaf in the range [0.001, 0.1], i.e. requiring between 0.1% and 10% of the samples in each leaf.
Any feature importance calculation is only meaningful when it is computed on a robust, well-regularized model.
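A minimal sketch of that grid search (the estimator settings, the ten-point grid, and the x and y variables are assumptions for illustration, not from the original post):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# min_samples_leaf as a float is interpreted as a fraction of the training samples
param_grid = {"min_samples_leaf": np.linspace(0.001, 0.1, 10)}

search = GridSearchCV(RandomForestClassifier(n_estimators=100, random_state=0),
                      param_grid, cv=5)
search.fit(x, y)
print(search.best_params_)

# Importances of the best (regularized) forest
best_rf = search.best_estimator_
for rank, idx in enumerate(np.argsort(best_rf.feature_importances_)[::-1], start=1):
    print("%d. feature %d (%f)" % (rank, idx, best_rf.feature_importances_[idx]))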
answered Mar 29 at 17:02 – jonnor