Problem Description

A customer's SQL statement returns data when run directly against the database, and executing the query through the data query node also works, but the execution node fails with:

Caused by: org.apache.spark.SparkArithmeticException: [DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 8 exceeds max precision 7.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.decimalPrecisionExceedsMaxPrecisionError(QueryExecutionErrors.scala:1275) ~[spark-catalyst_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.types.Decimal.set(Decimal.scala:124) ~[spark-catalyst_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.types.Decimal$.apply(Decimal.scala:571) ~[spark-catalyst_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$4(JdbcUtils.scala:412) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.nullSafeConvert(JdbcUtils.scala:549) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$3(JdbcUtils.scala:412) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$3$adapted(JdbcUtils.scala:410) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:361) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:343) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) ~[?:?]
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithIterator(FileFormatDataWriter.scala:91) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:403) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1563) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:410) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:100) ~[spark-sql_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:888) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:888) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:328) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.scheduler.Task.run(Task.scala:139) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) ~[spark-core_2.12-3.4.1.jar:3.4.1]
	... 3 more

Problem Analysis

The message [DECIMAL_PRECISION_EXCEEDS_MAX_PRECISION] Decimal precision 8 exceeds max precision 7 means that a value requiring 8 digits of precision was encountered in a column whose type allows at most 7. Spark's Decimal type itself supports up to 38 digits of precision, and the engine exposes no interface for raising a column's limit beyond that. Note that the customer's value is computed inside the SQL without an explicit result type, and the stack trace points at JdbcUtils while Spark is reading rows from the JDBC source, so Spark most likely derived the result type on its own, and some actual values exceed the derived type's limit.
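As an illustration of this derivation (a quick check you can run in any Spark SQL session; the operand types below are arbitrary), Spark sizes the result of decimal multiplication as precision p1 + p2 + 1 and scale s1 + s2, capped at 38:

SELECT typeof(CAST(1 AS DECIMAL(5,2)) * CAST(1 AS DECIMAL(3,1)));
-- returns decimal(9,3): precision 5 + 3 + 1 = 9, scale 2 + 1 = 3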

Solution

Explicitly specify the result type in the SQL, for example SELECT CAST(123.45 AS DECIMAL(38,2)), which explicitly declares the result as DECIMAL(38,2), as sketched below.
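A minimal before/after sketch of the fix; the table and column names (orders, amount, rate) are hypothetical stand-ins for the customer's query, and DECIMAL(38,2) simply uses Spark's maximum precision with a scale chosen to fit the data:

-- Before: the DECIMAL type of the computed column is derived
-- automatically and may be narrower than some actual values.
SELECT amount * rate AS total FROM orders;

-- After: an explicit CAST pins the result type so every value fits.
SELECT CAST(amount * rate AS DECIMAL(38, 2)) AS total FROM orders;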
