GCP PubSub - How to enqueue asynchronous message?


I would like some information about the publisher settings in the GCP Pub/Sub environment. I want to enqueue messages that will be consumed by a Cloud Function; publication should trigger when a certain number of messages is reached or after a certain amount of time.



I configured the topic as follows:



topic.PublishSettings = pubsub.PublishSettings{
	ByteThreshold:  1e6,              // Publish a batch when its size in bytes reaches this value (1e6 = 1 MB).
	CountThreshold: 100,              // Publish a batch when it has this many messages.
	DelayThreshold: 10 * time.Second, // Publish a non-empty batch after this delay has passed.
}



When I call the publish function, there is a 10-second delay on each call, and messages are not being added to the queue...



for _, v := range list {
	ctx := context.Background()
	res := a.Topic.Publish(ctx, &pubsub.Message{Data: v})

	// Block until the result is returned and a server-generated
	// ID is returned for the published message.
	serverID, err = res.Get(ctx)
	if err != nil {
		return "", err
	}
}




Can someone help me?



Cheers




































  • When you say "I have a 10 second delay on each call," do you mean that res.Get is returning after 10 seconds? When you say "Messages are not added to the queue," what do you mean? res.Get is not returning? It is returning an error? Your subscriber is not receiving the message? Additionally, what does "stack messages" mean? You want the messages all in a single batch that is processed in a Cloud Function as a unit?

    – Kamal Aboul-Hosn
    Mar 28 at 18:48












  • No, there is a 10-second delay between each publication, and res.Get returns normally. I want to batch all requests, as described here (stackoverflow.com/questions/49070836/batching-pubsub-requests), and then trigger a Cloud Function (subscriber) to minimize the cost of processing messages. My understanding was that the 3 messages, for example, were put in the queue and published after 10 seconds.

    – anthony44
    Mar 28 at 19:04


















asked Mar 28 at 18:34 by anthony44, edited Mar 28 at 19:21
1 Answer
Batching on the publisher side is designed to allow for more cost efficiency when sending messages to Google Cloud Pub/Sub. Given that the minimum billing unit for the service is 1KB, it can be cheaper to send multiple messages in the same Publish request. For example, sending two 0.5KB messages as separate Publish requests would result in being charged for 2KB of data (1KB for each). Batched into a single Publish request, they would be charged as 1KB of data.



The tradeoff with batching is latency: in order to fill up batches, the publisher has to wait to receive more messages to batch together. The three batching properties (ByteThreshold, CountThreshold, and DelayThreshold) allow one to control the level of that tradeoff. The first two properties control how much data or how many messages we put in a single batch. The last property controls how long the publisher should wait to send a batch.



As an example, imagine you have CountThreshold set to 100. If you are publishing few messages, it could take a while to receive 100 messages to send as a batch, meaning the latency for messages in that batch will be higher because they sit in the client waiting to be sent. With DelayThreshold set to 10 seconds, a batch is sent once it has 100 messages in it or once the first message in the batch was received at least 10 seconds ago. This puts a limit on the amount of latency introduced in order to get more data into an individual batch.



The code as you have it is going to result in batches with only a single message that each take 10 seconds to publish. The reason is the call to res.Get(ctx), which will block until the message has been successfully sent to the server. With CountThreshold set to 100 and DelayThreshold set to 10 seconds, the sequence that is happening inside your loop is:



  1. A call to Publish puts a message in a batch to publish.

  2. That batch is waiting to receive 99 more messages or for 10 seconds to pass before sending the batch to the server.

  3. The code is waiting for this message to be sent to the server and return with a serverID.

  4. Given the code doesn't call Publish again until res.Get(ctx) returns, it waits 10 seconds to send the batch.


  5. res.Get(ctx) returns with a serverID for the single message.

  6. Go back to 1.

If you actually want to batch messages together, you can't call res.Get(ctx) before the next Publish call. Either call Publish inside a goroutine (one goroutine per message) or amass the results in a slice and call Get on them outside the loop, e.g.:



var res []*pubsub.PublishResult
ctx := context.Background()
for _, v := range list {
	res = append(res, a.Topic.Publish(ctx, &pubsub.Message{Data: v}))
}
for _, r := range res {
	serverID, err = r.Get(ctx)
	if err != nil {
		return "", err
	}
}




Something to keep in mind is that batching will optimize cost on the publish side, not on the subscribe side. Cloud Functions is built with push subscriptions. This means that messages must be delivered to the subscriber one at a time (since the response code is what is used to ack or nack each message), which means there is no batching of messages delivered to the subscriber.































  • OK, thanks a lot. Given your explanations, since my publisher will be outside Google Cloud, I'm not sure whether it's better to optimize costs by setting up batching; I would say no. Same question on the subscriber side: is it more advantageous to set up a Cloud Function or to deploy a Docker container?

    – anthony44
    Mar 28 at 22:04











  • It really depends on your use case. Cloud Functions have the advantage of being very easy to integrate with Pub/Sub via push subscriptions. They are great for stateless processing where you don't need to maintain any data across message deliveries. But you have less control over delivery to the subscriber; e.g., the pull subscriber client library supports flow control, which allows more exact control over the actual delivery of messages.

    – Kamal Aboul-Hosn
    Mar 29 at 11:29













Your Answer






StackExchange.ifUsing("editor", function ()
StackExchange.using("externalEditor", function ()
StackExchange.using("snippets", function ()
StackExchange.snippets.init();
);
);
, "code-snippets");

StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "1"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);














draft saved

draft discarded
















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f55404670%2fgcp-pubsub-how-to-enqueue-asynchronous-message%23new-answer', 'question_page');

);

Post as a guest















Required, but never shown

























1 Answer
1






active

oldest

votes








1 Answer
1






active

oldest

votes









active

oldest

votes






active

oldest

votes









1
















Batching the publisher side is designed to allow for more cost efficiency when sending messages to Google Cloud Pub/Sub. Given that the minimum billing unit for the service is 1KB, it can be cheaper to send multiple messages in the same Publish request. For example, sending two 0.5KB messages as separate Publish requests would result in being changed for sending 2KB of data (1KB for each). If one were to batch that into a single Publish request, it would be charged as 1KB of data.



The tradeoff with batching is latency: in order to fill up batches, the publisher has to wait to receive more messages to batch together. The three batching properties (ByteThreshold, CountThreshold, and DelayThreshold) allow one to control the level of that tradeoff. The first two properties control how much data or how many messages we put in a single batch. The last property controls how long the publisher should wait to send a batch.



As an example, imagine you have CountThreshold set to 100. If you are publishing few messages, it could take awhile to receive 100 messages to send as a batch. This means that the latency for messages in that batch will be higher because they are sitting in the client waiting to be sent. With DelayThreshold set to 10 seconds, that means that a batch would be sent if it had 100 messages in it or if the first message in the batch was received at least 10 seconds ago. Therefore, this is putting a limit on the amount of latency to introduce in order to have more data in an individual batch.



The code as you have it is going to result in batches with only a single message that each take 10 seconds to publish. The reason is the call to res.Get(ctx), which will block until the message has been successfully sent to the server. With CountThreshold set to 100 and DelayThreshold set to 10 seconds, the sequence that is happening inside your loop is:



  1. A call to Publish puts a message in a batch to publish.

  2. That batch is waiting to receive 99 more messages or for 10 seconds to pass before sending the batch to the server.

  3. The code is waiting for this message to be sent to the server and return with a serverID.

  4. Given the code doesn't call Publish again until res.Get(ctx) returns, it waits 10 seconds to send the batch.


  5. res.Get(ctx) returns with a serverID for the single message.

  6. Go back to 1.

If you actually want to batch messages together, you can't call res.Get(ctx) before the next Publish call. You'll want to either call publish inside a goroutine (so one routine per message) or you'll want to amass the res objects in a list and then call Get on them outside the loop, e.g.:



 var res []*PublishResult
ctx := context.Background()
for _, v := range list
res = append(res, a.Topic.Publish(ctx, &pubsub.MessageData: v))

for _, r := range res
serverID, err = r.Get(ctx)
if err != nil
return "", err




Something to keep in mind is that batching will optimize cost on the publish side, not on the subscribe side. Cloud Functions is built with push subscriptions. This means that messages must be delivered to the subscriber one at a time (since the response code is what is used to ack or nack each message), which means there is no batching of messages delivered to the subscriber.






share|improve this answer

























  • OK thanks a lot. In view of your explanations, as my publisher will be outside google cloud, I do not know if I better to optimize the costs by setting up the batch. I would say no. Same question, on the side of subscriber, is it advantageous to set up a google function or to deploy a container docker?

    – anthony44
    Mar 28 at 22:04






  • 1





    It really depends on your use case. Cloud Functions have the advantage of being very easy to integrate with Pub/Sub via the push subscriptions. They are great for stateless processing where you don't need to maintain any data across delivery of messages. But it means you have less control over delivery to the subscriber, e.g., pull subscriber client library supports flow control, which allows one to have more exact control over the actual delivery of messages.

    – Kamal Aboul-Hosn
    Mar 29 at 11:29















1
















Batching the publisher side is designed to allow for more cost efficiency when sending messages to Google Cloud Pub/Sub. Given that the minimum billing unit for the service is 1KB, it can be cheaper to send multiple messages in the same Publish request. For example, sending two 0.5KB messages as separate Publish requests would result in being changed for sending 2KB of data (1KB for each). If one were to batch that into a single Publish request, it would be charged as 1KB of data.



The tradeoff with batching is latency: in order to fill up batches, the publisher has to wait to receive more messages to batch together. The three batching properties (ByteThreshold, CountThreshold, and DelayThreshold) allow one to control the level of that tradeoff. The first two properties control how much data or how many messages we put in a single batch. The last property controls how long the publisher should wait to send a batch.



As an example, imagine you have CountThreshold set to 100. If you are publishing few messages, it could take awhile to receive 100 messages to send as a batch. This means that the latency for messages in that batch will be higher because they are sitting in the client waiting to be sent. With DelayThreshold set to 10 seconds, that means that a batch would be sent if it had 100 messages in it or if the first message in the batch was received at least 10 seconds ago. Therefore, this is putting a limit on the amount of latency to introduce in order to have more data in an individual batch.



The code as you have it is going to result in batches with only a single message that each take 10 seconds to publish. The reason is the call to res.Get(ctx), which will block until the message has been successfully sent to the server. With CountThreshold set to 100 and DelayThreshold set to 10 seconds, the sequence that is happening inside your loop is:



  1. A call to Publish puts a message in a batch to publish.

  2. That batch is waiting to receive 99 more messages or for 10 seconds to pass before sending the batch to the server.

  3. The code is waiting for this message to be sent to the server and return with a serverID.

  4. Given the code doesn't call Publish again until res.Get(ctx) returns, it waits 10 seconds to send the batch.


  5. res.Get(ctx) returns with a serverID for the single message.

  6. Go back to 1.

If you actually want to batch messages together, you can't call res.Get(ctx) before the next Publish call. You'll want to either call publish inside a goroutine (so one routine per message) or you'll want to amass the res objects in a list and then call Get on them outside the loop, e.g.:



 var res []*PublishResult
ctx := context.Background()
for _, v := range list
res = append(res, a.Topic.Publish(ctx, &pubsub.MessageData: v))

for _, r := range res
serverID, err = r.Get(ctx)
if err != nil
return "", err




Something to keep in mind is that batching will optimize cost on the publish side, not on the subscribe side. Cloud Functions is built with push subscriptions. This means that messages must be delivered to the subscriber one at a time (since the response code is what is used to ack or nack each message), which means there is no batching of messages delivered to the subscriber.






share|improve this answer

























  • OK thanks a lot. In view of your explanations, as my publisher will be outside google cloud, I do not know if I better to optimize the costs by setting up the batch. I would say no. Same question, on the side of subscriber, is it advantageous to set up a google function or to deploy a container docker?

    – anthony44
    Mar 28 at 22:04






  • 1





    It really depends on your use case. Cloud Functions have the advantage of being very easy to integrate with Pub/Sub via the push subscriptions. They are great for stateless processing where you don't need to maintain any data across delivery of messages. But it means you have less control over delivery to the subscriber, e.g., pull subscriber client library supports flow control, which allows one to have more exact control over the actual delivery of messages.

    – Kamal Aboul-Hosn
    Mar 29 at 11:29













1














1










1









Batching the publisher side is designed to allow for more cost efficiency when sending messages to Google Cloud Pub/Sub. Given that the minimum billing unit for the service is 1KB, it can be cheaper to send multiple messages in the same Publish request. For example, sending two 0.5KB messages as separate Publish requests would result in being changed for sending 2KB of data (1KB for each). If one were to batch that into a single Publish request, it would be charged as 1KB of data.



The tradeoff with batching is latency: in order to fill up batches, the publisher has to wait to receive more messages to batch together. The three batching properties (ByteThreshold, CountThreshold, and DelayThreshold) allow one to control the level of that tradeoff. The first two properties control how much data or how many messages we put in a single batch. The last property controls how long the publisher should wait to send a batch.



As an example, imagine you have CountThreshold set to 100. If you are publishing few messages, it could take awhile to receive 100 messages to send as a batch. This means that the latency for messages in that batch will be higher because they are sitting in the client waiting to be sent. With DelayThreshold set to 10 seconds, that means that a batch would be sent if it had 100 messages in it or if the first message in the batch was received at least 10 seconds ago. Therefore, this is putting a limit on the amount of latency to introduce in order to have more data in an individual batch.



The code as you have it is going to result in batches with only a single message that each take 10 seconds to publish. The reason is the call to res.Get(ctx), which will block until the message has been successfully sent to the server. With CountThreshold set to 100 and DelayThreshold set to 10 seconds, the sequence that is happening inside your loop is:



  1. A call to Publish puts a message in a batch to publish.

  2. That batch is waiting to receive 99 more messages or for 10 seconds to pass before sending the batch to the server.

  3. The code is waiting for this message to be sent to the server and return with a serverID.

  4. Given the code doesn't call Publish again until res.Get(ctx) returns, it waits 10 seconds to send the batch.


  5. res.Get(ctx) returns with a serverID for the single message.

  6. Go back to 1.

If you actually want to batch messages together, you can't call res.Get(ctx) before the next Publish call. You'll want to either call publish inside a goroutine (so one routine per message) or you'll want to amass the res objects in a list and then call Get on them outside the loop, e.g.:



 var res []*PublishResult
ctx := context.Background()
for _, v := range list
res = append(res, a.Topic.Publish(ctx, &pubsub.MessageData: v))

for _, r := range res
serverID, err = r.Get(ctx)
if err != nil
return "", err




Something to keep in mind is that batching will optimize cost on the publish side, not on the subscribe side. Cloud Functions is built with push subscriptions. This means that messages must be delivered to the subscriber one at a time (since the response code is what is used to ack or nack each message), which means there is no batching of messages delivered to the subscriber.






share|improve this answer













Batching the publisher side is designed to allow for more cost efficiency when sending messages to Google Cloud Pub/Sub. Given that the minimum billing unit for the service is 1KB, it can be cheaper to send multiple messages in the same Publish request. For example, sending two 0.5KB messages as separate Publish requests would result in being changed for sending 2KB of data (1KB for each). If one were to batch that into a single Publish request, it would be charged as 1KB of data.



The tradeoff with batching is latency: in order to fill up batches, the publisher has to wait to receive more messages to batch together. The three batching properties (ByteThreshold, CountThreshold, and DelayThreshold) allow one to control the level of that tradeoff. The first two properties control how much data or how many messages we put in a single batch. The last property controls how long the publisher should wait to send a batch.



As an example, imagine you have CountThreshold set to 100. If you are publishing few messages, it could take a while to accumulate 100 messages to send as a batch, which means the latency for messages in that batch will be higher because they sit in the client waiting to be sent. With DelayThreshold set to 10 seconds, a batch is sent once it either contains 100 messages or the first message in the batch was received at least 10 seconds ago. The delay threshold therefore caps how much latency is introduced in order to pack more data into an individual batch.
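In the Go client (cloud.google.com/go/pubsub), these three thresholds live on the topic's PublishSettings. A configuration sketch matching the example above might look like this (topic creation elided; treat this as a fragment, not a complete program):

```go
import (
	"time"

	"cloud.google.com/go/pubsub"
)

// configureBatching applies the thresholds discussed above to a topic.
func configureBatching(topic *pubsub.Topic) {
	topic.PublishSettings = pubsub.PublishSettings{
		CountThreshold: 100,              // send once 100 messages are buffered...
		DelayThreshold: 10 * time.Second, // ...or once the oldest message is 10s old
		ByteThreshold:  1e6,              // ...or once the batch reaches ~1MB
	}
}
```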



The code as you have it is going to result in batches with only a single message that each take 10 seconds to publish. The reason is the call to res.Get(ctx), which will block until the message has been successfully sent to the server. With CountThreshold set to 100 and DelayThreshold set to 10 seconds, the sequence that is happening inside your loop is:



  1. A call to Publish puts a message in a batch to publish.

  2. That batch waits to receive 99 more messages or for 10 seconds to pass before being sent to the server.

  3. The code waits for this message to be sent to the server and return with a serverID.

  4. Since the code doesn't call Publish again until res.Get(ctx) returns, the batch waits the full 10 seconds before being sent.

  5. res.Get(ctx) returns with a serverID for the single message.

  6. Go back to 1.

If you actually want to batch messages together, you can't call res.Get(ctx) before the next Publish call. You'll want to either call Publish inside a goroutine (one goroutine per message) or amass the PublishResult objects in a slice and call Get on them after the loop, e.g.:



 var res []*pubsub.PublishResult
 ctx := context.Background()
 for _, v := range list {
 	// Publish is asynchronous: it adds the message to the current batch
 	// and returns immediately with a PublishResult.
 	res = append(res, a.Topic.Publish(ctx, &pubsub.Message{Data: v}))
 }
 // Block on the results only after every message has been handed to the batcher.
 for _, r := range res {
 	if _, err := r.Get(ctx); err != nil {
 		return "", err
 	}
 }



Something to keep in mind is that batching will optimize cost on the publish side, not on the subscribe side. Cloud Functions is built with push subscriptions. This means that messages must be delivered to the subscriber one at a time (since the response code is what is used to ack or nack each message), which means there is no batching of messages delivered to the subscriber.







answered Mar 28 at 21:20









Kamal Aboul-Hosn

5,668 reputation, 17 silver badges, 24 bronze badges















  • OK, thanks a lot. Given your explanations, since my publisher will be outside Google Cloud, I'm not sure whether it's worth setting up batching to optimize costs; I would say no. Same question on the subscriber side: is it better to set up a Cloud Function or to deploy a Docker container?

    – anthony44
    Mar 28 at 22:04











    It really depends on your use case. Cloud Functions have the advantage of being very easy to integrate with Pub/Sub via push subscriptions. They are great for stateless processing where you don't need to maintain any data across deliveries of messages. But it means you have less control over delivery to the subscriber; e.g., the pull subscriber client library supports flow control, which allows more exact control over the actual delivery of messages.

    – Kamal Aboul-Hosn
    Mar 29 at 11:29




































