Sklearn GridSearch with PredefinedSplit scoring does not match a standalone classifier


I am using sklearn's GridSearchCV to find the best parameters for random forest classification, using a predefined validation set. The scores from the best estimator returned by GridSearchCV do not match the scores obtained by training a separate classifier with the same parameters.



The data split definition



X = pd.concat([X_train, X_devel])
y = pd.concat([y_train, y_devel])
test_fold = -X.index.str.contains('train').astype(int)
ps = PredefinedSplit(test_fold)
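
For context: PredefinedSplit treats a test_fold value of -1 as "never in the validation set" and any other value as the index of the validation fold, so here the rows whose index contains 'train' (value -1) are always used for fitting and the devel rows (value 0) form the single validation split. A minimal sketch with an illustrative toy array:

import numpy as np
from sklearn.model_selection import PredefinedSplit

# Toy example: -1 -> always in the training part, 0 -> validation fold 0
toy_fold = np.array([-1, -1, -1, 0, 0])
toy_ps = PredefinedSplit(toy_fold)

for train_idx, devel_idx in toy_ps.split():
    print(train_idx, devel_idx)  # [0 1 2] [3 4]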


The GridSearch definition



n_estimators = [10]
max_depth = [4]
grid = {'n_estimators': n_estimators, 'max_depth': max_depth}

rf = RandomForestClassifier(random_state=0)
rf_grid = GridSearchCV(estimator = rf, param_grid = grid, cv = ps, scoring='recall_macro')
rf_grid.fit(X, y)
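
As an aside, the validation score that GridSearchCV itself computes on the predefined split can be read back from the fitted object; that is the number directly comparable to a devel-set recall computed by hand. A short sketch, assuming rf_grid has been fitted as above:

# best_score_ is the recall_macro obtained on the single predefined validation fold
print(rf_grid.best_score_)

# cv_results_ holds the same scores per parameter combination
print(rf_grid.cv_results_['mean_test_score'])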


The classifier definition



clf = RandomForestClassifier(n_estimators=10, max_depth=4, random_state=0)
clf.fit(X_train, y_train)


The recall was calculated explicitly using sklearn.metrics.recall_score



y_pred_train = clf.predict(X_train)
y_pred_devel = clf.predict(X_devel)

uar_train = recall_score(y_pred_train, y_train, average='macro')
uar_devel = recall_score(y_pred_devel, y_devel, average='macro')


GridSearch:



uar train: 0.32189884516029466
uar devel: 0.3328299259976279


Random Forest:



uar train: 0.483040291148839
uar devel: 0.40706644557392435


What is the reason for such a mismatch?










Tags: python, validation, scikit-learn, grid-search, scoring






asked Mar 27 at 7:43 by Oxana Verkholyak, edited Mar 27 at 10:00
          2 Answers
























Answer 1 (score 2)














          There are multiple issues here:




1. Your input arguments to recall_score are reversed. The correct order is:



recall_score(y_true, y_pred)


But you are doing:



            recall_score(y_pred_train, y_train, average='macro')


            Correct that to:



            recall_score(y_train, y_pred_train, average='macro')


2. You are calling rf_grid.fit(X, y) for the grid search. After finding the best parameter combination, GridSearchCV refits the estimator on the whole data (all of X; the PredefinedSplit is only used during cross-validation to select the best parameters). So the estimator returned by GridSearchCV has seen the whole data, and its scores will differ from those of clf.fit(X_train, y_train), which has seen only the training subset (see the sketch below).
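
To illustrate point 2, here is a sketch (not code from the answer) of two ways to keep the comparison apples-to-apples: skip the automatic refit on the full data, or retrain a fresh classifier on the training subset only with the selected parameters. It reuses rf, grid, ps, X, y and the train/devel variables from the question.

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import GridSearchCV

# Option A: run the search without the final refit on the whole data
rf_grid_no_refit = GridSearchCV(rf, grid, cv=ps, scoring='recall_macro', refit=False)
rf_grid_no_refit.fit(X, y)

# Option B: refit manually on the training split only, using the parameters
# selected by the original (refit=True) search
best_rf = RandomForestClassifier(random_state=0, **rf_grid.best_params_)
best_rf.fit(X_train, y_train)
print(recall_score(y_devel, best_rf.predict(X_devel), average='macro'))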






answered Mar 27 at 10:18 by Vivek Kumar, edited Mar 27 at 10:25
• Thanks for the insights. Something is still unclear: after changing the order of the arguments, the recall after retraining on the whole dataset is expected to improve (since the classifier has now seen all the data), yet it remains lower for both the train and devel subsets. Any clue why that is?

            – Oxana Verkholyak
            Mar 27 at 10:35











• @OxanaVerkholyak I'm sorry, I cannot say anything more without seeing the data samples. There can be many things: 1) Is your train-test split balanced? 2) Is your data imbalanced? 3) How many classes are there? "recall_macro" does not take label imbalance into account; maybe that is the reason. What about other metrics: accuracy, confusion matrix, etc.? Please post the complete code along with some sample data that reproduces this result.

            – Vivek Kumar
            Mar 27 at 10:39











          • Ok, maybe I will open a new discussion for that :) For now, you have answered my question, many thanks!

            – Oxana Verkholyak
            Mar 27 at 10:43


















Answer 2 (score 0)














It's because in your GridSearchCV you are using scoring='recall_macro', which returns the macro-averaged recall score.



However, the default score returned by a RandomForestClassifier is the mean accuracy, which is why the numbers are different: one is macro-averaged recall and the other is accuracy (see the sketch below).
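
To make the distinction concrete, a small sketch using the names from the question: the classifier's default score() is mean accuracy, while scoring='recall_macro' in GridSearchCV is macro-averaged recall, so the two numbers are not comparable.

from sklearn.metrics import recall_score

# Default score of a classifier: mean accuracy on the given data
acc_devel = clf.score(X_devel, y_devel)

# What GridSearchCV optimised with scoring='recall_macro': macro-averaged recall
uar_devel = recall_score(y_devel, clf.predict(X_devel), average='macro')
print(acc_devel, uar_devel)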






answered Mar 27 at 9:16 by Mohammed Kashif
• Thanks for the reply; however, I should have mentioned that I explicitly computed the recall using sklearn.metrics.recall_score

            – Oxana Verkholyak
            Mar 27 at 9:57












• @OxanaVerkholyak can you please post the code with which you computed the scores for the standalone classifier?

            – Mohammed Kashif
            Mar 27 at 10:01











• I have edited the original post; please see above for the recall calculation code.

            – Oxana Verkholyak
            Mar 27 at 10:05













