MongoDB aggregation framework performance

















I am using the MongoDB aggregation framework for all GET requests, both those that fetch a single document and those that fetch multiple documents. For a few requests I have around 90 to 100 stages in my aggregation pipeline ($match, $skip, $limit, $lookup, $unwind, $addFields, $group), which I maintain sequentially. If I pass a limit greater than 50, I either get an error saying the $group stage has exceeded the memory limit, or the query takes around 60 seconds to execute. In my database every collection has more than 60,000 documents. How do I resolve this? Would sharding solve my problem?
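For illustration only, here is a minimal sketch in mongo shell syntax of the general pipeline shape described above; the orders and customers collections and every field name are hypothetical placeholders, not taken from the post:

    // Hypothetical pipeline using the same stage types the question lists.
    db.orders.aggregate([
      { $match: { status: "ACTIVE" } },                  // filter as early as possible
      { $lookup: {
          from: "customers",                             // hypothetical joined collection
          localField: "customerId",
          foreignField: "_id",
          as: "customer"
      } },
      { $unwind: "$customer" },
      { $addFields: { customerName: "$customer.name" } },
      { $group: { _id: "$customer._id", orderCount: { $sum: 1 } } },
      { $skip: 0 },
      { $limit: 50 }
    ])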










      mongodb






asked Mar 27 at 5:00 by Akarsh H S
edited Mar 27 at 5:20 by Ravi Shankar Bharti

























          1 Answer














Some optimizations for your query:
1. Try to reduce the number of documents fetched in the very first stage of the pipeline.
2. Create indexes on the fields queried in the first pipeline stage, preferably a $match (see the sketch after this list).
3. Indexes can only be used by the initial stage(s) of the pipeline.
4. Sharding will increase throughput, at a minor cost to per-query performance (this also depends on the workload).
P.S.: $lookup will not work against sharded collections.
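A minimal sketch of points 1–2 in mongo shell syntax, assuming a hypothetical orders collection with status, customerId, and amount fields (the names are illustrative, not from the answer):

    // Index the field used by the leading $match so the first stage can use it.
    db.orders.createIndex({ status: 1 })

    // Keep $match (and, where possible, $limit) at the front of the pipeline so
    // later stages such as $group and $lookup see far fewer documents.
    db.orders.aggregate([
      { $match: { status: "ACTIVE" } },   // can use the index created above
      { $group: { _id: "$customerId", total: { $sum: "$amount" } } }
    ])

Running the same call through db.orders.explain().aggregate([...]) is one way to confirm that the leading $match actually uses the index.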






answered Mar 27 at 6:30 by Anirudh Simha

• Also, to overcome memory-exceeded errors, try using the allowDiskUse option on the aggregation pipeline.

  – Anirudh Simha, Mar 27 at 6:32
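For example, a minimal sketch of passing that option in mongo shell syntax (the orders collection and its fields are placeholders):

    // allowDiskUse lets blocking stages such as $group and $sort spill to
    // temporary files instead of failing at the 100 MB per-stage memory limit.
    db.orders.aggregate(
      [ { $group: { _id: "$customerId", total: { $sum: "$amount" } } } ],
      { allowDiskUse: true }
    )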









