Specify minimum number of generated files from Hive insert


I am using Hive on AWS EMR to insert the results of a query into a Hive table partitioned by date. Although the total output size is similar each day, the number of generated files varies, usually between 6 and 8, but on some days it creates just a single big file. I reran the query a couple of times, in case the number of files was influenced by the availability of nodes in the cluster, but it seems consistent.



So my questions are:
(a) what determines how many files are generated, and
(b) is there a way to specify the minimum number of files or (even better) the maximum size of each file?










Tags: hive, amazon-emr, hadoop-partitioning






asked Mar 27 at 7:40 by gsakkis; edited Mar 27 at 10:51 by leftjoin

























1 Answer






The number of files generated by INSERT ... SELECT depends on the number of processes running in the final reducer stage (the final reducer vertex if you are running on Tez) and on the configured bytes per reducer.



If the table is partitioned and no DISTRIBUTE BY is specified, then in the worst case every reducer creates files in every partition. This puts high pressure on the reducers and may cause an OOM exception.



To make sure each reducer writes files for only one partition, add DISTRIBUTE BY partition_column at the end of your query.



If the data volume is too big and you want more reducers, to increase parallelism and create more files per partition, add a random number to the DISTRIBUTE BY clause, for example FLOOR(RAND()*100.0)%10. This additionally distributes the data across 10 random buckets, so each partition will contain 10 files.
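To see why this expression yields exactly 10 bucket values, here is a quick Python sketch (Python is used only for illustration; the actual expression runs inside Hive):

```python
import math
import random

def hive_bucket(n_buckets: int = 10) -> int:
    # Mirrors FLOOR(RAND()*100.0) % 10: RAND() returns a float in [0, 1),
    # so FLOOR(RAND()*100.0) is an integer in 0..99, and % 10 maps it to 0..9.
    return math.floor(random.random() * 100.0) % n_buckets

# With many rows, every one of the 10 buckets receives data,
# so each partition ends up with 10 files.
buckets = {hive_bucket() for _ in range(100_000)}
print(sorted(buckets))
```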



Finally, your INSERT statement will look like this:



          INSERT OVERWRITE table PARTITION(part_col)
          SELECT *
          FROM src
DISTRIBUTE BY part_col, FLOOR(RAND()*100.0)%10; -- 10 files per partition


This configuration setting also affects the number of files generated:



          set hive.exec.reducers.bytes.per.reducer=67108864; 


If you have too much data, Hive will start more reducers so that each reducer process handles no more than the configured bytes per reducer. The more reducers, the more files are generated. Decreasing this setting increases the number of running reducers, and each reducer creates at least one file. If the partition column is not in the DISTRIBUTE BY clause, then each reducer may again create files in every partition.
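As a rough illustration of the arithmetic (the input size here is hypothetical), the reducer count Hive derives from this setting is essentially a ceiling division:

```python
import math

def estimated_reducers(total_input_bytes: int, bytes_per_reducer: int) -> int:
    # Hive starts roughly ceil(total / bytes_per_reducer) reducers
    # (subject to caps such as hive.exec.reducers.max);
    # each reducer writes at least one file.
    return max(1, math.ceil(total_input_bytes / bytes_per_reducer))

# e.g. 1 GiB of reducer input with the 64 MiB setting above
print(estimated_reducers(1 << 30, 67108864))  # about 16 reducers, so ~16 files
```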



To make a long story short, use



          DISTRIBUTE BY part_col, FLOOR(RAND()*100.0)%10 -- 10 files per partition


If you want 20 files per partition, use FLOOR(RAND()*100.0)%20; this guarantees a minimum of 20 files per partition if you have enough data, but does not limit the maximum size of each file.



The bytes-per-reducer setting does not guarantee a fixed minimum number of files; the number of files depends on total data size / bytes.per.reducer. What this setting does guarantee is the maximum size of each file.



You can combine both methods, the bytes-per-reducer limit plus DISTRIBUTE BY, to control both the minimum number of files and the maximum file size.
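Putting the two levers together, the combined approach might look like the following sketch (table and column names are placeholders):

```sql
-- Cap each reducer's input at 64 MiB, bounding the maximum file size...
set hive.exec.reducers.bytes.per.reducer=67108864;

INSERT OVERWRITE TABLE target PARTITION(part_col)
SELECT *
FROM src
-- ...and spread each partition across 10 random buckets, giving a
-- minimum of 10 files per partition when there is enough data.
DISTRIBUTE BY part_col, FLOOR(RAND()*100.0)%10;
```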



Also see this answer about using DISTRIBUTE BY to distribute data evenly between reducers: https://stackoverflow.com/a/38475807/2700344






• Thanks, good to know about DISTRIBUTE BY, though I ended up using CLUSTER BY in the table definition. Any thoughts on the pros and cons of each approach?
  – gsakkis, Mar 27 at 12:01

• @gsakkis Using DISTRIBUTE BY RAND() in the query allows more flexibility and completely solves the skew issue. If the table is bucketed and the data is skewed, bucketing will not help much, whereas you can easily change the DISTRIBUTE BY in the query. Bucketing requires sorting, which you can also add to your query to achieve basically the same thing: a configured number of sorted files. Setting hive.enforce.bucketing = true automatically starts as many reducers as there are buckets. Bytes per reducer + DISTRIBUTE BY is more flexible, because it starts a number of reducers that depends on the data size.
  – leftjoin, Mar 27 at 12:31










answered Mar 27 at 10:42 by leftjoin; edited Mar 27 at 11:42














