

How does write ahead logging improve IO performance in Postgres?


I've been reading through the WAL chapter of the Postgres manual and was confused by a portion of the chapter:




Using WAL results in a significantly reduced number of disk writes, because only the log file needs to be flushed to disk to guarantee that a transaction is committed, rather than every data file changed by the transaction.




How is it that continuously writing to the WAL is more performant than simply writing to the table/index data itself?



As I see it (forgetting for now the resiliency benefits of WAL), Postgres needs to complete two disk operations: first it needs to commit to the WAL on disk, and then it still needs to change the table data to be consistent with the WAL. I'm sure there's a fundamental aspect of this I've misunderstood, but it seems like adding an additional step between a client transaction and the final state of the table data couldn't actually increase overall performance. Thanks in advance!
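
For concreteness, the accounting I have in mind looks something like this toy Python sketch (the numbers and the two-step model are my own illustration, not actual Postgres behavior):

    # Toy accounting behind the question (made-up numbers, not Postgres
    # internals): if every commit has to hit both the WAL and the table data,
    # WAL looks like strictly more disk operations than direct writes.

    TXNS = 100

    writes_direct = TXNS * 1     # write the table data once per commit
    writes_with_wal = TXNS * 2   # write the WAL record, then the table data too

    print(f"direct:     {writes_direct} writes")    # 100
    print(f"WAL + data: {writes_with_wal} writes")  # 200
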
      postgresql database-performance wal






asked Mar 25 at 15:40 by RangerRanger
edited Mar 25 at 16:24 by Laurenz Albe

          1 Answer

          You are fundamentally right: the extra writes to the transaction log will per se not reduce the I/O load.



          But a transaction will normally touch several files (tables, indexes etc.). If you force all these files out to storage (“sync”), you will incur more I/O load than if you sync just a single file.



          Of course all these files will have to be written and sync'ed eventually (during a checkpoint), but often the same data are modified several times between two checkpoints, and then the corresponding files will have to be sync'ed only once.
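
To put rough numbers on that (a minimal sketch with an invented workload, not Postgres internals): suppose 100 transactions each touch one table file and two index files, and a checkpoint happens every 50 transactions. Without WAL, durability would require syncing all three data files at every commit; with WAL, each commit syncs only the log, and the dirty data files are synced once per checkpoint:

    # Toy fsync accounting (invented workload and checkpoint interval,
    # not Postgres internals).

    TXNS = 100
    FILES_PER_TXN = 3        # one table file + two index files
    CHECKPOINT_EVERY = 50    # transactions between checkpoints (assumed)

    # Without WAL: every touched data file must be fsync'ed at each commit.
    fsyncs_without_wal = TXNS * FILES_PER_TXN

    # With WAL: one log fsync per commit; dirty data files are fsync'ed only
    # at checkpoints, however often they were modified in between.
    checkpoints = TXNS // CHECKPOINT_EVERY
    fsyncs_with_wal = TXNS * 1 + checkpoints * FILES_PER_TXN

    print(f"without WAL: {fsyncs_without_wal} fsyncs")  # 300
    print(f"with WAL:    {fsyncs_with_wal} fsyncs")     # 106

The WAL fsyncs are also sequential appends to a single file, which is typically the cheapest I/O pattern a disk offers, so in practice the win is usually larger than the raw counts suggest.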






answered Mar 25 at 16:23 by Laurenz Albe
• "but often the same data are modified several times between two checkpoints, and then the corresponding files will have to be sync'ed only once." Ah, excellent. I think that's where my misunderstanding came from. By waiting for a checkpoint you can make multiple changes to the same file rather than continuously retrieving the file and writing to it. Thank you for the clear and concise answer. – RangerRanger, Mar 26 at 0:57










