Spark throws AnalysisException: Undefined function: 'count' for Spark built-in function


If I run the following query in Spark (2.3.2.0-mapr-1901), it runs fine on the first run:



SELECT count(`cpu-usage`)                   AS `cpu-usage-count`,
       sum(`cpu-usage`)                     AS `cpu-usage-sum`,
       percentile_approx(`cpu-usage`, 0.95) AS `cpu-usage-approxPercentile`
FROM filtered_set


Here filtered_set is a DataFrame that has been registered as a temp view using createOrReplaceTempView.
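
For context, the registration and query happen roughly as in the sketch below (the session, the sample data, and the column construction are illustrative stand-ins, not the actual job code):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("cpu-usage-stats").getOrCreate()
import spark.implicits._

// Stand-in for the real filtered_set DataFrame
val filtered = Seq(0.42, 0.87, 0.13).toDF("cpu-usage")
filtered.createOrReplaceTempView("filtered_set")

// Same aggregation as above, issued through the (shared) session
spark.sql(
  """SELECT count(`cpu-usage`)                   AS `cpu-usage-count`,
    |       sum(`cpu-usage`)                     AS `cpu-usage-sum`,
    |       percentile_approx(`cpu-usage`, 0.95) AS `cpu-usage-approxPercentile`
    |FROM filtered_set""".stripMargin).show()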



I get a result and all is good on the first call. But...



If I then run the job again (note that this is a shared Spark context, managed via Apache Livy), Spark throws:



Wrapped by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.apache.spark.sql.AnalysisException: Undefined function: 'count'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.; line 2 pos 10
org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15$$anonfun$applyOrElse$50.apply(Analyzer.scala:1216)
org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15$$anonfun$applyOrElse$50.apply(Analyzer.scala:1216)
org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:53)
org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15.applyOrElse(Analyzer.scala:1215)
org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15.applyOrElse(Analyzer.scala:1213)
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)

...

org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
org.apache.spark.sql.SQLContext.sql(SQLContext.scala:694)


The problem occurs on the second run of the Livy job (which reuses the previous Spark session). It is not isolated to the count function (the same happens with sum, etc.); every function appears to fail on the second run, regardless of what was called in the first run.



It seems as though Spark's function registry is being cleared out (including the default built-in functions). We are not doing anything with the Spark context ourselves.
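
As a sanity check, one could list the session's registered functions between the two runs to see whether the built-ins are still visible to the analyzer; a minimal sketch, assuming access to the same shared spark session object:

// If the registry really is being cleared, count/sum should disappear from this output
spark.catalog.listFunctions()
  .filter("name in ('count', 'sum', 'percentile_approx')")
  .show(truncate = false)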



Questions:
- Is this expected or normal behaviour with Spark?
- How would I reset or re-initialise the Spark session so that it does not lose these functions?



I have seen "Undefined function" errors described elsewhere for user-defined functions, but never for the built-ins.










apache-spark livy

asked Mar 27 at 8:32
ZenMasterZed

  • Does sum still work? You don't have any variable called count, right?

    – Shaido
    Mar 27 at 9:23











  • No, none of the functions work the second time around. The data inputs are identical on both calls, and I certainly don't have such a variable intentionally. This issue looks identical to forums.databricks.com/answers/17583/view.html, as I am also using temp views and getting the same problem.

    – ZenMasterZed
    Mar 27 at 9:37
















