


XGBoost decision tree selection

















I have a question about which decision tree I should choose from an XGBoost model.



I will use the following code as an example.



# import packages
import xgboost as xgb
import matplotlib.pyplot as plt

# create the DMatrix (X and y are assumed to be defined already)
df_dmatrix = xgb.DMatrix(data=X, label=y)

# set up the parameter dictionary
# note: in recent versions of XGBoost, "reg:linear" is a deprecated alias for "reg:squarederror"
params = {"objective": "reg:linear", "max_depth": 2}

# train the model with 10 boosting rounds (10 trees)
xg_reg = xgb.train(params=params, dtrain=df_dmatrix, num_boost_round=10)

# plot the n-th tree
xgb.plot_tree(xg_reg, num_trees=n)  # my question relates to here


I create 10 trees in the xg_reg model, and I can plot any one of them by setting n in the last line to the index of the tree.



My question is: how can I know which tree explains the dataset best? Is it always the last one? Or should I decide which features I want included in the tree, and then choose the tree that contains those features?










python decision-tree xgboost

asked Mar 21 at 22:48 by Kaiyi Zou, edited Mar 22 at 4:32 by MarredCheese
1 Answer
"My question is: how can I know which tree explains the data set best?"




XGBoost is an implementation of gradient-boosted decision trees (GBDT). Roughly speaking, a GBDT model is a sequence of trees, each one improving the prediction of the previous ones by fitting their residuals. So the tree that explains the data best is the last one, at index n - 1 (indices are zero-based).



You can read more about GBDT here.
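To make the residual-boosting idea concrete, here is a minimal sketch (my own illustration, not code from the thread) showing that predictions improve as more trees are summed. It assumes xgboost >= 1.4, where Booster.predict accepts an iteration_range argument, and it uses synthetic data in place of the asker's X and y.

import numpy as np
import xgboost as xgb

# synthetic regression data (an illustrative assumption, not the asker's data)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "max_depth": 2}
booster = xgb.train(params=params, dtrain=dtrain, num_boost_round=10)

# predict with only the first k trees: the training error shrinks as k grows,
# because each new tree fits the residuals left by the previous ones
for k in (1, 5, 10):
    pred = booster.predict(dtrain, iteration_range=(0, k))
    rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
    print(f"first {k:2d} trees: train RMSE = {rmse:.4f}")

No single tree is meaningful on its own here; the model's prediction is the sum over all of them.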




"Or should I determine which features I want to include in the tree, and then choose the tree which contains the features?"




All the trees are trained on the same base features; what changes at every boosting iteration is the target, which becomes the residuals of the ensemble so far. So you cannot identify a best tree this way. This video gives an intuitive explanation of residuals.
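If the goal is to reason about features, a model-level view aggregated over all trees is more useful than inspecting any one tree. A short sketch of that idea (again my own addition; it reuses the booster from the sketch above, and trees_to_dataframe additionally requires pandas):

# feature importance accumulated over every tree, not read off a single one
importance = booster.get_score(importance_type="gain")
print(importance)

# per-tree view: one row per node, showing which feature each split uses
df = booster.trees_to_dataframe()
print(df.groupby("Tree")["Feature"].unique())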






answered Mar 22 at 4:10 by Alessandro Solbiati


















• Thanks. So is it true that the more rounds we let the model iterate, the better the tree is? Then we need to consider the trade-off between the time spent training the model and the accuracy of the model. – Kaiyi Zou, Mar 22 at 20:34

• No, it's not true. If you keep training, after a while you will start to overfit, and your model will lose predictive power because it becomes worse and worse at generalizing. You can read more here: en.wikipedia.org/wiki/Overfitting – Alessandro Solbiati, Mar 22 at 20:58
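In practice this trade-off is usually handled with early stopping rather than by picking num_boost_round by hand: hold out a validation set and stop boosting when the validation error stops improving. A self-contained sketch (my own illustration; the split size and the early_stopping_rounds value are arbitrary assumptions):

import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# synthetic data again, standing in for a real dataset
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = 2 * X[:, 0] + rng.normal(scale=0.1, size=400)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

dtr = xgb.DMatrix(X_tr, label=y_tr)
dva = xgb.DMatrix(X_va, label=y_va)

booster = xgb.train(
    params={"objective": "reg:squarederror", "max_depth": 2},
    dtrain=dtr,
    num_boost_round=1000,          # generous upper bound on rounds
    evals=[(dva, "validation")],   # monitored set for early stopping
    early_stopping_rounds=10,      # stop after 10 rounds without improvement
)
print("best iteration:", booster.best_iteration)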










