Efficient dask method for reading a SQL table with a WHERE clause for 5 million rows


I have a 55-million-row table in MSSQL and only need to pull about 5 million of those rows into a dask dataframe. dask's read_sql_table currently doesn't support raw SQL queries; it does support SQLAlchemy expressions, but there is an issue with that approach, as described here: Dask read_sql_table errors out when using an SQLAlchemy expression
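
For context, the SQLAlchemy-expression route looks roughly like the sketch below. The table and column names are placeholders, and whether read_sql_table accepts the selectable in this form (and how index_col must be given) depends on the dask version, which is exactly what the linked question runs into.

import sqlalchemy as sa
import dask.dataframe as dd

# Hypothetical schema: indexed integer ID plus a column used in the WHERE clause.
tbl = sa.table('my_table', sa.column('ID'), sa.column('value'), sa.column('created'))

# Push the filter into the selectable so the database does the work.
stmt = sa.select([tbl.c.ID, tbl.c.value]).where(tbl.c.created >= '2019-01-01')

ddf = dd.read_sql_table(stmt, uri=uri, index_col='ID')  # `uri` as in the snippets below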



I attempted to implement the suggested solution, but the process still takes around 1 minute.



I also tried loading the data into a pandas dataframe all at once and then converting it to dask, which takes around 2 minutes.



For comparison, loading an entire ~4.5-million-row table the standard dask way (dd.read_sql_table on the indexed table) takes about 15 seconds.



import pandas as pd
import dask
import dask.dataframe as dd

# Chunked read; `query` and `uri` are defined earlier.
generator = pd.read_sql(sql=query, con=uri, chunksize=50000)
dds = []
for chunk in generator:
    dds.append(dask.delayed(dd.from_pandas)(chunk, npartitions=5))
ddf = dd.from_delayed(dds)

CPU times: user 50.1 s, sys: 2.13 s, total: 52.2 s
Wall time: 52.3 s
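
Part of why the chunked approach is slow is that pd.read_sql(..., chunksize=...) streams every chunk through a single connection before dask ever schedules any work, so the read is strictly sequential. One alternative worth sketching (not from the original post) is to split the WHERE query into index ranges and wrap each range in its own delayed pd.read_sql call, so partitions are fetched lazily and can be pulled in parallel. The table and column names, the flag filter, and the ID bounds are all hypothetical.

from dask import delayed
import dask.dataframe as dd
import pandas as pd

# Hypothetical: the target rows live between these IDs on an indexed column.
lo, hi, n_parts = 0, 5_000_000, 20
step = (hi - lo) // n_parts
bounds = [(lo + i * step, lo + (i + 1) * step) for i in range(n_parts)]

def load_range(start, stop):
    # Integer bounds only in this sketch; parameterize real queries.
    q = (f"SELECT ID, value FROM my_table "
         f"WHERE some_flag = 1 AND ID >= {start} AND ID < {stop}")
    return pd.read_sql(q, uri)

parts = [delayed(load_range)(start, stop) for start, stop in bounds]
ddf = dd.from_delayed(parts)  # lazily builds the dask dataframe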


result = engine.execute(query)
df = pd.DataFrame(result.fetchall())
df.columns = result.keys()
ddf = dd.from_pandas(df, npartitions=10)

CPU times: user 54.3 s, sys: 3.14 s, total: 57.4 s
Wall time: 2min 41s

ddf = dd.read_sql_table(table="4.5mil_table",
                        uri=uri, index_col='ID')

CPU times: user 117 ms, sys: 4.12 ms, total: 122 ms
Wall time: 16 s
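
The 16-second path above is fast because read_sql_table partitions on the indexed index_col and lets the database serve simple range scans. If the 5 million rows you need happen to fall in a contiguous range of that index, read_sql_table's limits= argument may let you push the restriction down without a raw SQL query; the table name and ID bounds below are placeholders.

import dask.dataframe as dd

# Assumption: the 5 million rows of interest sit in a contiguous ID range.
ddf = dd.read_sql_table(
    table="big_55mil_table",          # placeholder name
    uri=uri,
    index_col="ID",
    limits=(10_000_000, 15_000_000),  # lower/upper bounds on the index column
    npartitions=20,
)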


I know there has to be a more efficient way to do this that I am missing.











Tags: python, sqlalchemy, dask






asked Mar 28 at 17:35 by msolomon87
























