Thanks, but `hive.stats.autogather` does not work for Spark SQL.

----- Original Message -----
From: Mich Talebzadeh <[hidden email]>
To: [hidden email]
Cc: user <[hidden email]>
Subject: Re: Is Spark SQL able to auto update partition stats like hive by setting hive.stats.autogather=true
Date: 2020-12-19 06:45

Hi,
A fellow forum member kindly spotted an error of mine: a comma was missing at the end of the line just above the `("hive.stats.autogather", "true")` entry.
This appears to be accepted:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("app1") \
    .enableHiveSupport() \
    .getOrCreate()

# Hive settings
settings = [
    ("hive.exec.dynamic.partition", "true"),
    ("hive.exec.dynamic.partition.mode", "nonstrict"),
    ("spark.sql.orc.filterPushdown", "true"),
    ("hive.msck.path.validation", "ignore"),
    ("spark.sql.caseSensitive", "true"),
    ("spark.speculation", "false"),
    ("hive.metastore.authorization.storage.checks", "false"),
    ("hive.metastore.client.connect.retry.delay", "5s"),
    ("hive.metastore.client.socket.timeout", "1800s"),
    ("hive.metastore.connect.retries", "12"),
    ("hive.metastore.execute.setugi", "false"),
    ("hive.metastore.failure.retries", "12"),
    ("hive.metastore.schema.verification", "false"),
    ("hive.metastore.schema.verification.record.version", "false"),
    ("hive.metastore.server.max.threads", "100000"),
    # note: this key duplicates the "false" entry above; the path value suggests
    # a different property (such as the warehouse directory) was intended
    ("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse"),
    ("hive.stats.autogather", "true")
]

spark.sparkContext._conf.setAll(settings)

However, I have not tested it myself.
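One caveat worth noting: calling setAll on `spark.sparkContext._conf` after the session already exists may not actually take effect, because `_conf` is internal and most settings are only read when the context is created. A minimal sketch of two more conventional alternatives, untested here, with a shortened settings list purely for illustration:

from pyspark.sql import SparkSession

settings = [
    ("hive.exec.dynamic.partition", "true"),
    ("hive.stats.autogather", "true"),
]

# Option 1: supply the settings to the builder before the session is created.
builder = SparkSession.builder.appName("app1").enableHiveSupport()
for key, value in settings:
    builder = builder.config(key, value)
spark = builder.getOrCreate()

# Option 2: apply them at runtime via SQL; only settings that Spark honours
# at the session level will take effect this way.
for key, value in settings:
    spark.sql(f"SET {key}={value}")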
HTH
Mich
LinkedIn https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Disclaimer: Use it at your own risk. Any and all responsibility for any loss, damage or destruction
of data or any other property which may arise from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from such
loss, damage or destruction.
I am afraid it is not supported for Spark SQL.
I tried it as below
spark = SparkSession.builder \
    .appName("app1") \
    .enableHiveSupport() \
    .getOrCreate()

# Hive settings
settings = [
    ("hive.exec.dynamic.partition", "true"),
    ("hive.exec.dynamic.partition.mode", "nonstrict"),
    ("spark.sql.orc.filterPushdown", "true"),
    ("hive.msck.path.validation", "ignore"),
    ("spark.sql.caseSensitive", "true"),
    ("spark.speculation", "false"),
    ("hive.metastore.authorization.storage.checks", "false"),
    ("hive.metastore.client.connect.retry.delay", "5s"),
    ("hive.metastore.client.socket.timeout", "1800s"),
    ("hive.metastore.connect.retries", "12"),
    ("hive.metastore.execute.setugi", "false"),
    ("hive.metastore.failure.retries", "12"),
    ("hive.metastore.schema.verification", "false"),
    ("hive.metastore.schema.verification.record.version", "false"),
    ("hive.metastore.server.max.threads", "100000"),
    ("hive.metastore.authorization.storage.checks", "/apps/hive/warehouse")  # <- missing comma here
    ("hive.stats.autogather", "true")
]

spark.sparkContext._conf.setAll(settings)
and got this error:
("hive.stats.autogather", "true") TypeError: 'tuple' object is not callable
HTH
`spark.sql.statistics.size.autoUpdate.enabled` only works for table-level stats updates. But for partition stats, I can only update them with `ANALYZE TABLE tablename PARTITION(part) COMPUTE STATISTICS`. So is Spark SQL able to auto-update partition stats like Hive does when hive.stats.autogather=true is set?
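For reference, this is the kind of per-partition statement meant above; the table and partition names are hypothetical and assume an existing Hive-enabled session:

# Hypothetical table "sales" partitioned by "dt", purely for illustration.
spark.sql("ANALYZE TABLE sales PARTITION (dt='2020-12-18') COMPUTE STATISTICS")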
OK, if it is not working then you need to find a workaround to update the stats beforehand, for example:
if spark.sql(f"""SHOW TABLES IN {v.DB} like '{v.tableName}'""").count() == 1:
    spark.sql(f"""ANALYZE TABLE {v.fullyQualifiedTableName} COMPUTE STATISTICS""")
    rows = spark.sql(f"""SELECT COUNT(1) FROM {v.fullyQualifiedTableName}""").collect()[0][0]
    print("number of rows is ", rows)
else:
    print(f"\nTable {v.fullyQualifiedTableName} does not exist, creating table ")

HTH
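Since the question is specifically about partition stats, one possible extension of this workaround is to enumerate the partitions and analyze each one. A sketch only, reusing `v.fullyQualifiedTableName` from above and assuming the partition values can be quoted as strings:

# SHOW PARTITIONS returns rows like "dt=2020-12-18" in a column named "partition".
for row in spark.sql(f"""SHOW PARTITIONS {v.fullyQualifiedTableName}""").collect():
    # Turn "col1=a/col2=b" into "col1='a', col2='b'" for the PARTITION clause.
    spec = ", ".join(
        "{}='{}'".format(*kv.split("=", 1)) for kv in row.partition.split("/")
    )
    spark.sql(
        f"""ANALYZE TABLE {v.fullyQualifiedTableName} PARTITION ({spec}) COMPUTE STATISTICS"""
    )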
