

PySpark: SQLContext temp table is not returning any table


I am quite new to PySpark, so this question may seem elementary to others.
I am trying to export a DataFrame created via createOrReplaceTempView() to Hive. The steps are as follows:



sqlcntx = SQLContext(sc)
df = sqlcntx.read.format("jdbc").options(url="sqlserver://.....details of MS Sql server",dbtable = "table_name").load()
df_cv_temp = df.createOrReplaceTempView("df")


When I call df_cv_temp.show(5), I get the following error:



NoneType Object has no attribute 'show'


Interestingly, df.show(5) gives proper output.
Naturally, with the above error I am not able to proceed further.



Now I have two questions.



  1. How do I fix the above issue?

  2. Assuming the first issue is taken care of, what is the best way to export df_cv_temp to Hive tables?

P.S. I am using PySpark 2.0.



Update: Incorporating Jim's Answer



Following the answer I received from Jim, I have updated the code. Please see the revised code below.



from pyspark.sql import HiveContext, SQLContext
sql_cntx = SQLContext(sc)
df = sql_cntx.read.format("jdbc").options(url="sqlserver://.....details of MS Sql server", dbtable="table_name").load()
df.createOrReplaceTempView("df_cv_temp")
df_cv_filt = sql_cntx.sql("select * from df_cv_temp where DeviceTimeStamp between date_add(current_date(),-1) and current_date()")  # retrieving just one day's records
hc = HiveContext(sc)


Now the problem begins. Please refer to my second question.



df_cv_tbl = hc.sql("create table if not exists df_cv_raw as select * from df_cv_filt")
df_cv_tbl.write.format("orc").saveAsTable("df_cv_raw")


The above two lines produce the error shown below.



pyspark.sql.utils.AnalysisException: u'Table or view not found: df_cv_filt; line 1 pos 14'



So what is the right way to approach this?










      pyspark






asked Mar 28 at 7:28 by pythondumb (188 reputation; 1 silver badge, 11 bronze badges); edited Mar 28 at 12:53

























1 Answer


































          Instead of



          df_cv_temp = df.createOrReplaceTempView("df") 


you should use:



          df.createOrReplaceTempView("table1")


This is because df.createOrReplaceTempView(<name_of_the_view>) creates (or replaces, if a view with that name already exists) a lazily evaluated "view" that you can then use like a Hive table in Spark SQL. The call itself does not return anything, which is why df_cv_temp ends up as a NoneType object.



The temp view can then be queried as follows:



          spark.sql("SELECT field1 AS f1, field2 as f2 from table1").show()


If you are sure you have enough memory, you can persist it directly as a Hive table, as shown below. This physically creates a managed Hive table, which you can then query even from the Hive CLI.



          df.write.saveAsTable("table1")
































          • So when you say 'a lazily evaluated "view" that you can then use like a hive table in Spark SQL', does it mean I can't use the following query to create another table named 'tbl'? hc = HiveContext(hc); tbl = hc.sql("CREATE TABLE df_final AS SELECT * FROM table1")

            – pythondumb
            Mar 28 at 11:21












          • Yes, you can use it as you have mentioned above. Then you can print it with tbl.show(). That will work. Upvote if it worked.

            – Jim Todd
            Mar 28 at 12:38











          • Unfortunately it is not working. I have edited the question per your recommendation.

            – pythondumb
            Mar 28 at 12:39











          • 'tbl' in your case is now a DataFrame. The result set of the SQL will be stored in tbl as a DataFrame. What's the error you get now?

            – Jim Todd
            Mar 28 at 12:42











          • I edited the answer to create an actual Hive table from the df. Check if that works.

            – Jim Todd
            Mar 28 at 12:51










answered Mar 28 at 8:21 by Jim Todd (1,039 reputation; 1 gold badge, 6 silver badges, 11 bronze badges); edited Mar 28 at 12:50














