Output from elmo pretrained model
I am working on sentiment analysis and using ELMo to get word embeddings, but I am confused by the output this method gives. Consider the code given on the TensorFlow Hub website:
elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True)
embeddings = elmo(["the cat is on the mat", "dogs are in the fog"],
                  signature="default", as_dict=True)["elmo"]
The embedding vectors for a particular sentence vary with the number of strings you pass in. To explain in detail, let
x = "the cat is on the mat"
y = "dogs are in the fog"
x1 = elmo([x], signature="default", as_dict=True)["elmo"]
z1 = elmo([x, y], signature="default", as_dict=True)["elmo"]
Then x1[0] will not be equal to z1[0], and the result changes as you change the input list of strings. Why does the output for one sentence depend on the others? I am not training anything; I am only using an existing pretrained model. Given that, I am confused about how to convert my comment text to embeddings and use it for sentiment analysis. Please explain.
Note: to get the embedding vectors I use the following code:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.tables_initializer())
    # return the average of the ELMo features over the time axis
    return sess.run(tf.reduce_mean(x1, 1))
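As an aside on comparing the extracted vectors: exact float equality is too strict a test for deep-learning outputs. A tolerance-based check such as np.allclose (a suggestion, not part of the original code) is more appropriate. The toy vectors below reuse the leading values reported later in the comment thread:

```python
import numpy as np

# Two toy vectors that agree to about 5 significant digits,
# like x1[0] vs z1[0] in the discussion below.
a = np.array([0.05517201, -0.02187633, -0.17496817], dtype=np.float32)
b = np.array([0.05517215, -0.02187647, -0.17496812], dtype=np.float32)

print(np.array_equal(a, b))          # False: bitwise comparison fails
print(np.allclose(a, b, atol=1e-5))  # True: equal within tolerance
```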
tensorflow sentiment-analysis word-embedding tensorflow-hub elmo
asked Mar 25 at 8:46, edited Mar 26 at 9:37
Karanam Krishna
96
1 Answer
When I run your code, x1[0] and z1[0] are the same. However, z1[1] differs from the result of
y1 = elmo([y], signature="default", as_dict=True)["elmo"]
return sess.run(tf.reduce_mean(y1, 1))
because y has fewer tokens than x, and blindly reducing over outputs past-the-end will pick up junk.
I recommend using the "default" output instead of "elmo", which does the intended reduction. Please see the module documentation.
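To illustrate why a naive mean over a padded batch picks up junk while a mask-aware mean does not, here is a sketch with toy numbers (not real ELMo activations); the masked mean approximates what the module's "default" output does, but this is not the module's actual code:

```python
import numpy as np

# Toy "elmo"-style output for a batch of 2 sentences padded to 6 tokens.
# Row 0 has 6 real tokens; row 1 has only 5, so its position 5 holds
# whatever the network produced past the end ("junk").
batch = np.ones((2, 6, 4), dtype=np.float32)  # [batch, time, features]
batch[1, 5, :] = 99.0                         # past-the-end junk
lengths = np.array([6, 5])

naive_mean = batch.mean(axis=1)               # includes the junk step

# Mask-aware mean: zero out past-the-end steps, divide by true lengths.
mask = (np.arange(6)[None, :] < lengths[:, None]).astype(np.float32)
masked_mean = (batch * mask[:, :, None]).sum(axis=1) / lengths[:, None]

print(naive_mean[1])    # polluted by the junk step
print(masked_mean[1])   # all ones, as expected
```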
x1 = array([[ 0.05517201, -0.02187633, -0.17496817, ..., -0.36848053, 0.09267851, 0.23179102]], dtype=float32) and z1 = array([[ 0.05517215, -0.02187647, -0.17496812, ..., -0.36848068, 0.09267855, 0.23179094], [-0.00665377, 0.12139908, -0.1935362 , ..., -0.08462355, 0.07242572, 0.19882451]], dtype=float32). Using ["elmo"] gave me a different result (observe the last decimals), and it keeps changing as you increase the list size, i.e., we get different vectors for one sentence.
– Karanam Krishna
Mar 26 at 11:20
Also, why does z1[1] differ from y1? The vector representation should be the same for a given sentence. The sentences (strings) in z = [x, y, ...] are independent (consider analyzing different tweets), so neither the size of z nor the other strings in it should affect the ELMo vectors, right? I tried setting trainable=False as well, but it did not help.
– Karanam Krishna
Mar 26 at 11:34
Re "last decimals": there's a lot of math happening here in single precision. I'm not exactly sure how the different examples in the batch interact with each other, but a match in five significant digits meets my bar of "equal" for deep learning.
– arnoegw
Mar 26 at 16:26
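The single-precision point can be demonstrated directly: float32 addition is not associative, so merely regrouping the same operations (as batching can) shifts the low decimals. Toy numbers, unrelated to ELMo itself:

```python
import numpy as np

vals = np.array([1e8, 1.0, -1e8], dtype=np.float32)

# 1e8 + 1 rounds back to 1e8 in float32 (spacing at 1e8 is 8), so the
# contribution of 1.0 is lost entirely in the left-to-right grouping.
left_to_right = (vals[0] + vals[1]) + vals[2]   # -> 0.0
reordered     = (vals[0] + vals[2]) + vals[1]   # -> 1.0

print(left_to_right, reordered)
```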
Re "differ with y1": Please see the module documentation for the significance of input lengths, and the recommendation to use output "default". When a recurrent neural network processes a batch of sequences with unequal lengths, it iterates up to the maximum length and leaves it to a postprocessing ("masking") step to delete the past-the-end outputs of shorter sequences.
– arnoegw
Mar 26 at 16:32
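The masking step described above can be sketched as follows: given a batch of sentences padded to the maximum length, build a boolean mask from the true sequence lengths so that past-the-end outputs can be discarded. A minimal illustration (mask construction only, not the RNN itself):

```python
import numpy as np

# A batch of tokenized sentences of unequal length, padded to the max length.
sentences = [["the", "cat", "is", "on", "the", "mat"],
             ["dogs", "are", "in", "the", "fog"]]
lengths = np.array([len(s) for s in sentences])   # [6, 5]
max_len = lengths.max()

# Boolean mask: True for real tokens, False for padding past the end.
mask = np.arange(max_len)[None, :] < lengths[:, None]
print(mask.astype(int))
# [[1 1 1 1 1 1]
#  [1 1 1 1 1 0]]
```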
Re "last decimals" : Since there is a difference of only one string, the difference in the values is not considerable. But if (say) z = [list of 1000 strings], the values do change from the first decimal point itself. I did lot of checks. I am repeating 'we get different vectors for each sentence ' and why is that ? Are the weights getting trained (but i am not training the model,just extracting vectors from pretrained model ) ?
– Karanam Krishna
Mar 27 at 5:32
answered Mar 26 at 10:02, edited Mar 26 at 16:34
arnoegw
40226