Saving a custom tf.estimator trained model for tensorflow serving



If I have a TensorFlow model using a custom tf.estimator, how would I save the model so that I can deploy it for production?



https://colab.research.google.com/github/google-research/bert/blob/master/predicting_movie_reviews_with_bert_on_tf_hub.ipynb#scrollTo=JIhejfpyJ8Bx



The model I'm using is similar to the one in the notebook above, and I'm wondering how to save the model once it's been trained. I have tried using SavedModel and restoring from checkpoints, but I've been unsuccessful with both (I was unable to adapt them for this example).










python tensorflow tensorflow-serving tensorflow-estimator






asked Mar 22 at 5:37
Abhinav23






















1 Answer






One way to do this is via gRPC.
TensorFlow has some not-so-straightforward documentation on it here: https://www.tensorflow.org/tfx/serving/serving_basic
The hardest part is actually saving your model; hosting it via Docker afterwards is reasonably well documented.
Finally, you can run inference on it using a gRPC client, e.g. https://github.com/epigramai/tfserving-python-predict-client



To do this, you need to save your model first. Something like the following, which you will need to tweak a bit for your example:



def save_serving_model(self, estimator):
    # Placeholder for the raw input: a batch of one string called 'sentence'.
    feature_placeholder = {'sentence': tf.placeholder(tf.string, [1], name='sentence_placeholder')}

    # build_raw_serving_input_receiver_fn doesn't serialize inputs, so it avoids
    # confusion between bytes and strings. You can simply pass a string.
    serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(feature_placeholder)

    # Save the model as a SavedModel.
    estimator.export_savedmodel("./TEST_Dir", serving_input_fn)
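A small aside that may help with the paths below (an illustration, not part of the original answer): export_savedmodel creates a timestamped subdirectory under the export directory and returns its path, so capturing the return value tells you exactly which directory to point saved_model_cli and the server at.

# Illustrative variant of the last line above: keep the returned path, which is
# the timestamped export directory created under ./TEST_Dir.
export_path = estimator.export_savedmodel("./TEST_Dir", serving_input_fn)
print(export_path)  # e.g. ./TEST_Dir/1553231234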



This will save a model in TEST_Dir.
As a quick test, you can run:



saved_model_cli run --dir /path/to/model/ --tag_set serve --signature_def predict --input_exprs="sentence=['This API is a little tricky']"


The next step is hosting this model, or "serving" it. The way I do this is via Docker, i.e. with a command like:



docker run -p 8500:8500 \
    --mount type=bind,source=/tmp/mnist,target=/models/mnist \
    -e MODEL_NAME=mnist -t tensorflow/serving &
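For the sentence model exported above, an equivalent command might look like the sketch below; the absolute path and the model name sentence_model are assumptions for illustration, and source= should point at the directory that contains the timestamped export(s):

# Illustrative only: the host path and the model name "sentence_model" are assumed.
# TensorFlow Serving looks for numbered (timestamped) version directories under
# /models/<MODEL_NAME> inside the container.
docker run -p 8500:8500 \
    --mount type=bind,source=/absolute/path/to/TEST_Dir,target=/models/sentence_model \
    -e MODEL_NAME=sentence_model -t tensorflow/serving &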



Finally, you can use the predict client (via gRPC) to pass a sentence to your server and get the result back. The GitHub link I added above has two blog posts about that.
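The linked client wraps this up nicely; as a rough sketch of what a raw gRPC call can look like (assuming the tensorflow-serving-api package for the generated stubs, and that the host/port, model name, and signature name match how the model was exported and served; sentence_model is the hypothetical name from the Docker sketch above):

# Minimal gRPC predict sketch; names such as "sentence_model" are illustrative and
# must match the MODEL_NAME and signature you actually exported and served.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'sentence_model'      # MODEL_NAME used when serving
request.model_spec.signature_name = 'predict'   # same signature as the saved_model_cli test
request.inputs['sentence'].CopyFrom(
    tf.make_tensor_proto(['This API is a little tricky'], dtype=tf.string))

result = stub.Predict(request, 10.0)  # 10 second timeout
print(result.outputs)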





























answered Apr 2 at 15:58
jwsmithers
