`spark.sql.statistics.size.autoUpdate.enabled` only works for table-level stats updates. For partition stats, I can only update them with `ANALYZE TABLE tablename PARTITION(part) COMPUTE STATISTICS`. So can Spark SQL auto-update partition stats, the way Hive does with hive.stats.autogather=true?
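For example, the per-partition route can be scripted roughly like this (an untested sketch; mydb.mytable and the single string partition column part are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("partition_stats") \
    .enableHiveSupport() \
    .getOrCreate()

# SHOW PARTITIONS returns one row per partition, e.g. 'part=2021-01-01'
for row in spark.sql("SHOW PARTITIONS mydb.mytable").collect():
    # turn 'part=2021-01-01' into part='2021-01-01'
    spec = row[0].replace("=", "='") + "'"
    spark.sql("ANALYZE TABLE mydb.mytable PARTITION({}) COMPUTE STATISTICS".format(spec))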
I am afraid this is not supported by Spark SQL.
I tried it as below:
spark = SparkSession.builder \
    .appName("app1") \
    .enableHiveSupport() \
    .getOrCreate()

# Hive settings
settings = [
    ("hive.exec.dynamic.partition", "true"),
    ("hive.exec.dynamic.partition.mode", "nonstrict"),
    ("spark.sql.orc.filterPushdown", "true"),
    ("hive.msck.path.validation", "ignore"),
    ("spark.sql.caseSensitive", "true"),
    ("spark.speculation", "false"),
    ("hive.metastore.authorization.storage.checks", "false"),
    ("hive.metastore.client.connect.retry.delay", "5s"),
    ("hive.metastore.client.socket.timeout", "1800s"),
    ("hive.metastore.connect.retries", "12"),
    ("hive.metastore.execute.setugi", "false"),
    ("hive.metastore.failure.retries", "12"),
    ("hive.metastore.schema.verification", "false"),
    ("hive.metastore.schema.verification.record.version", "false"),
    ("hive.metastore.server.max.threads", "100000"),
    ("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse")
    ("hive.stats.autogather", "true")
]

spark.sparkContext._conf.setAll(settings)
and got this error:

    ("hive.stats.autogather", "true")
TypeError: 'tuple' object is not callable
HTH
LinkedIn https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Disclaimer: Use it at your own risk. Any and all responsibility for any loss, damage or destruction
of data or any other property which may arise from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from such
loss, damage or destruction.
Hi,
A fellow forum member kindly spotted a careless error of mine: a comma was missing between two tuples in the settings list, after the ("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse") entry.
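That also explains the TypeError: without the comma, Python parses the two adjacent parenthesised tuples as a call, i.e. it tries to call the first tuple with the second one as its argument. A minimal reproduction:

settings = [
    ("hive.metastore.server.max.threads", "100000")   # <- comma missing here
    ("hive.stats.autogather", "true")                 # parsed as a call on the tuple above
]
# TypeError: 'tuple' object is not callable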
This appears to be accepted:

spark = SparkSession.builder \
    .appName("app1") \
    .enableHiveSupport() \
    .getOrCreate()

# Hive settings
settings = [
    ("hive.exec.dynamic.partition", "true"),
    ("hive.exec.dynamic.partition.mode", "nonstrict"),
    ("spark.sql.orc.filterPushdown", "true"),
    ("hive.msck.path.validation", "ignore"),
    ("spark.sql.caseSensitive", "true"),
    ("spark.speculation", "false"),
    ("hive.metastore.authorization.storage.checks", "false"),
    ("hive.metastore.client.connect.retry.delay", "5s"),
    ("hive.metastore.client.socket.timeout", "1800s"),
    ("hive.metastore.connect.retries", "12"),
    ("hive.metastore.execute.setugi", "false"),
    ("hive.metastore.failure.retries", "12"),
    ("hive.metastore.schema.verification", "false"),
    ("hive.metastore.schema.verification.record.version", "false"),
    ("hive.metastore.server.max.threads", "100000"),
    ("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse"),
    ("hive.stats.autogather", "true")
]

spark.sparkContext._conf.setAll(settings)

However, I have not tested it myself.
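One further caveat, also untested: calling spark.sparkContext._conf.setAll() after getOrCreate() generally has no effect on a session that already exists, so it may be safer to pass the same pairs to the builder before the session is created, along these lines:

# Sketch: apply the settings list at build time instead of after the fact
builder = SparkSession.builder.appName("app1").enableHiveSupport()
for key, value in settings:
    builder = builder.config(key, value)
spark = builder.getOrCreate()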
HTH
Mich