Why is (rf)clf feature_importance giving importance to a feature where all values are the same?


I am comparing multi-class classification with Random Forests and CART in scikit-learn.



Two of my features (feature 4 and feature 6) should be irrelevant for the classification because all of their values are the same.
Yet the feature_importances_ output of the RandomForestClassifier is the following:




Feature ranking:



  1. feature 3 (0.437165)

  2. feature 2 (0.216415)

  3. feature 6 (0.102238)

  4. feature 5 (0.084897)

  5. feature 1 (0.064624)

  6. feature 4 (0.059332)

  7. feature 0 (0.035328)



The CART feature_importances_ output is:




Feature ranking:



  1. feature 3 (0.954666)

  2. feature 6 (0.014117)

  3. feature 0 (0.011529)

  4. feature 1 (0.010586)

  5. feature 2 (0.006785)

  6. feature 4 (0.002204)

  7. feature 5 (0.000112)



In every row, feature 4 has the same value; the same is true for feature 6.



Here is the code:



Random Forest



import numpy as np

importances = rfclf.feature_importances_
# Spread of the importances across the individual trees in the forest
std = np.std([tree.feature_importances_ for tree in rfclf.estimators_],
             axis=0)
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking:")

for f in range(x.shape[1]):
    print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))


CART



importances = clf.feature_importances_
# Note: a single decision tree has no estimators_ attribute, so there is no
# per-tree standard deviation here (the rfclf.estimators_ line was a copy-paste slip).
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking:")

for f in range(x.shape[1]):
    print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))


I would expect the importances to be something like:




  1. feature 6 (0.000000)

  2. feature 4 (0.000000)



When I simply don't use those two features, my models overfit.
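For reference, here is a minimal sketch (on synthetic data, not the data from the question) of one way to verify that a column really is constant and to drop zero-variance columns before fitting; the column indices and array shapes are only illustrative.

import numpy as np
from sklearn.feature_selection import VarianceThreshold

x = np.random.rand(100, 7)
x[:, 4] = 1.0   # simulate a constant feature 4
x[:, 6] = 0.0   # simulate a constant feature 6

# Count distinct values per column; a truly constant column has exactly one.
n_unique = [np.unique(x[:, j]).size for j in range(x.shape[1])]
print("distinct values per feature:", n_unique)

# VarianceThreshold(0.0) drops zero-variance (constant) columns.
selector = VarianceThreshold(threshold=0.0)
x_reduced = selector.fit_transform(x)
print("kept columns:", selector.get_support(indices=True))

Since no threshold on a constant column can separate samples, one would expect a truly constant feature to receive an importance of exactly 0; if it does not, it is worth double-checking that the column really is constant in the array passed to fit.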










Tags: python, machine-learning, scikit-learn, random-forest, feature-selection






asked Mar 27 at 6:00 by thestruggleisreal (edited Mar 27 at 15:00)

  • I think it means RF didn't find a more significant feature to "split", which probably means features with lesser importance are just noise. Did you set a depth? You should try depth=None to see if it keeps using features 4 and 6. Another thing to try would be to just keep features 2 & 3 and see if the score changes. – CoMartel, Mar 27 at 7:48

  • When I just don't use those two features, my models overfit. @CoMartel But I will try your recommendations out. – thestruggleisreal, Mar 27 at 15:01
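Following up on the comparison suggested in the first comment, here is a hedged sketch that scores the forest with all features versus only features 2 and 3, assuming x and y are the arrays used in the question; the hyperparameter values are illustrative.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Fully grown trees (max_depth=None), scored by 5-fold cross-validation
rf_all = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
score_all = cross_val_score(rf_all, x, y, cv=5).mean()

# Same model, but keeping only features 2 and 3
rf_23 = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
score_23 = cross_val_score(rf_23, x[:, [2, 3]], y, cv=5).mean()

print("all features: %.3f, features 2 & 3 only: %.3f" % (score_all, score_23))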













1 Answer






































You need to limit the depth of your trees. I recommend doing a grid search over min_samples_leaf = [0.001, 0.1], i.e. requiring between 0.1% and 10% of the samples in each leaf.

Any kind of feature importance calculation must be done on a robust model to be meaningful.
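A minimal sketch of the suggested grid search, assuming x and y are the arrays from the question and a RandomForestClassifier as the estimator; the number of grid points and other settings are only illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# min_samples_leaf as a float is interpreted as a fraction of the training samples,
# so this grid covers roughly 0.1% to 10% of the samples per leaf.
param_grid = {"min_samples_leaf": np.linspace(0.001, 0.1, 10)}
search = GridSearchCV(RandomForestClassifier(n_estimators=100, random_state=0),
                      param_grid, cv=5)
search.fit(x, y)

print("best min_samples_leaf:", search.best_params_["min_samples_leaf"])
print("importances on the tuned model:", search.best_estimator_.feature_importances_)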







answered Mar 29 at 17:02 by jonnor



