



Efficient dask method for reading sql table with where clause for 5 million rows



I have a 55-million-row table in MSSQL, and I only need to pull 5 million of those rows into a dask dataframe. Currently, dask's reader doesn't support raw SQL queries, but it does support SQLAlchemy expressions; however, there is an issue with that approach, as described here: Dask read_sql_table errors out when using an SQLAlchemy expression



I attempted to implement the suggested solution, and the process still takes around 1 minute.



I also attempted to load the data into a pandas dataframe all at once and then convert it to dask, which takes around 2 minutes.



For comparison, loading a ~4.5-million-row table the accepted dask way takes about 15 seconds.



import pandas as pd
import dask
import dask.dataframe as dd

# Stream the query in 50k-row chunks and wrap each chunk as a delayed partition
generator = pd.read_sql(sql=query, con=uri, chunksize=50000)
dds = []
for chunk in generator:
    dds.append(dask.delayed(dd.from_pandas)(chunk, npartitions=5))
ddf = dd.from_delayed(dds)
# CPU times: user 50.1 s, sys: 2.13 s, total: 52.2 s
# Wall time: 52.3 s


# Pull everything into pandas first via SQLAlchemy, then convert to dask
result = engine.execute(query)          # engine = sqlalchemy.create_engine(uri)
df = pd.DataFrame(result.fetchall())
df.columns = result.keys()
ddf = dd.from_pandas(df, npartitions=10)
# CPU times: user 54.3 s, sys: 3.14 s, total: 57.4 s
# Wall time: 2min 41s

# Baseline: dask's native reader on a whole ~4.5-million-row table
ddf = dd.read_sql_table(table="4.5mil_table",
                        uri=uri, index_col='ID')
# CPU times: user 117 ms, sys: 4.12 ms, total: 122 ms
# Wall time: 16 s


I know there has to be a more efficient way to do this that I am missing.
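If the 5 million rows can be bounded by a range on the indexed `ID` column, one option is `dd.read_sql_table`'s `limits=` argument, which pushes that index range into the generated queries so the filtering happens in the database rather than in pandas. The same server-side-filtering idea can be sketched with plain pandas against an in-memory SQLite table (a hypothetical stand-in for the MSSQL table; `big_table` and the ID range are illustrative):

```python
import sqlite3
import pandas as pd

# Hypothetical stand-in for the 55M-row MSSQL table
con = sqlite3.connect(":memory:")
pd.DataFrame({"ID": range(100), "val": range(100)}).to_sql(
    "big_table", con, index=False)

# Push the WHERE clause into the database: only matching rows cross the wire
df = pd.read_sql("SELECT * FROM big_table WHERE ID BETWEEN ? AND ?",
                 con, params=(10, 19))
print(len(df))  # 10
```

With the real table, the equivalent would be something like `dd.read_sql_table("big_table", uri, index_col='ID', limits=(lo, hi))`, assuming the wanted rows form a contiguous index range; if they don't, this sketch does not apply directly.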











Tags: python, sqlalchemy, dask






      asked Mar 28 at 17:35









msolomon87

      164 bronze badges



