Submitting insert query from beeline failing on executor server with java 11


kaki mahesh raja
Hi All,

We have compiled Spark with Java 11 ("11.0.9.1"), and when testing the Thrift
server we are seeing that an insert query submitted by an operator through
beeline fails with the error below.

{"type":"log", "level":"ERROR", "time":"2021-03-15T05:06:09.559Z",
"timezone":"UTC", "log":"Uncaught exception in thread
blk_1077144750_3404529@[DatanodeInfoWithStorage[10.75.47.159:1044,DS-1678921c-3fe6-4015-9849-bd1223c23369,DISK],
DatanodeInfoWithStorage[10.75.47.158:1044,DS-0b440eb7-fa7e-4ad8-bb5a-cdc50f3e7660,DISK]]"}
java.lang.NoSuchMethodError: 'sun.misc.Cleaner
sun.nio.ch.DirectBuffer.cleaner()'
        at
org.apache.hadoop.crypto.CryptoStreamUtils.freeDB(CryptoStreamUtils.java:40)
~[hadoop-common-2.10.1.jar:?]
        at
org.apache.hadoop.crypto.CryptoInputStream.freeBuffers(CryptoInputStream.java:780)
~[hadoop-common-2.10.1.jar:?]
        at
org.apache.hadoop.crypto.CryptoInputStream.close(CryptoInputStream.java:322)
~[hadoop-common-2.10.1.jar:?]
        at java.io.FilterInputStream.close(FilterInputStream.java:180)
~[?:?]
        at
org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:1003)
~[hadoop-hdfs-client-2.10.1.jar:?]
        at
org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:845)
~[hadoop-hdfs-client-2.10.1.jar:?]
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:840)
~[hadoop-hdfs-client-2.10.1.jar:?]
{"type":"log", "level":"DEBUG", "time":"2021-03-15T05:06:09.570Z",
"timezone":"UTC", "log":"unwrapping token of length:54"}
{"type":"log", "level":"DEBUG", "time":"2021-03-15T05:06:09.599Z",
"timezone":"UTC", "log":"IPC Client (1437736861) connection to
vm-10-75-47-157/10.75.47.157:8020 from cspk got value #4"}

Any input on how to fix this issue would be appreciated.

Thanks and Regards,
kaki mahesh raja





Re: Submitting insert query from beeline failing on executor server with java 11

srowen
That looks like you didn't actually compile with Java 11. How did you try to do so?
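
One way to sanity-check what the JVM running the executors actually sees is a small reflection probe like the standalone sketch below (the class name CleanerProbe is made up for illustration; this is not Spark or Hadoop code). Hadoop 2.x was compiled against the pre-JDK-9 signature "sun.misc.Cleaner sun.nio.ch.DirectBuffer.cleaner()", and on JDK 9+ that method returns a different internal type, which is exactly what surfaces as the NoSuchMethodError above.

import java.lang.reflect.Method;

public class CleanerProbe {
    public static void main(String[] args) throws Exception {
        // Which JDK is this process actually running on?
        System.out.println("java.version = " + System.getProperty("java.version"));

        // Hadoop 2.x expects cleaner() to return sun.misc.Cleaner.
        // On JDK 9+ the method still exists but returns a jdk.internal type,
        // so code compiled against the old signature fails at runtime
        // with NoSuchMethodError.
        Class<?> directBuffer = Class.forName("sun.nio.ch.DirectBuffer");
        Method cleaner = directBuffer.getMethod("cleaner");
        System.out.println("DirectBuffer.cleaner() returns "
            + cleaner.getReturnType().getName());
    }
}

Run with the same java binary that launches the executors: on JDK 8 this should print sun.misc.Cleaner, while on JDK 11 it prints a jdk.internal type, which is the signature mismatch the hadoop-common 2.10.1 call above trips over.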



Re: Submitting insert query from beeline failing on executor server with java 11

Jungtaek Lim-2
Hadoop 2.x doesn't support JDK 11. See Hadoop Java version compatibility with JDK:


So you'll need to use Spark 3.x with the Hadoop 3.1 profile to make Spark work with JDK 11.
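
As background on why the Hadoop side matters: as far as I know, the JDK 11 support work in newer Hadoop releases replaces the call to the removed "sun.misc.Cleaner sun.nio.ch.DirectBuffer.cleaner()" signature with a cleanup path based on sun.misc.Unsafe.invokeCleaner, which is available since JDK 9. A minimal sketch of that approach, with illustrative class and method names rather than Hadoop's actual internals:

import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class DirectBufferRelease {

    // Free a direct ByteBuffer without relying on the pre-JDK-9
    // DirectBuffer.cleaner() signature that Hadoop 2.x was built against.
    // This path works on JDK 9+; on JDK 8 the old cleaner() route still exists.
    static void free(ByteBuffer buffer) throws Exception {
        if (buffer == null || !buffer.isDirect()) {
            return;
        }
        Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
        Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Object unsafe = theUnsafe.get(null);
        // invokeCleaner(ByteBuffer) was added to sun.misc.Unsafe in JDK 9.
        Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
        invokeCleaner.invoke(unsafe, buffer);
    }

    public static void main(String[] args) throws Exception {
        ByteBuffer buf = ByteBuffer.allocateDirect(4096);
        free(buf);
        System.out.println("direct buffer released");
    }
}

The hadoop-common 2.10.1 jar in the stack trace above has no such fallback, so CryptoStreamUtils.freeDB fails as soon as it runs on a JDK 11 JVM.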



Re: Submitting insert query from beeline failing on executor server with java 11

Jungtaek Lim-2
Hmm... I read the page again, and it looks like we are in a gray area.

The Hadoop community supports JDK 11 starting from Hadoop 3.3, while we haven't yet moved to Hadoop 3.3 as a dependency. It may not cause a real issue at runtime with Hadoop 3.x, since Spark only uses a part of Hadoop (the client layer), but it is worth knowing that this combination is not officially supported by the Hadoop community.
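
One practical way to see which case applies to a given deployment is to check the Hadoop client version actually on the Thrift server's classpath, for example with hadoop-common's org.apache.hadoop.util.VersionInfo utility. A minimal sketch (the class name HadoopClientVersion is made up; VersionInfo is a real Hadoop class):

import org.apache.hadoop.util.VersionInfo;

public class HadoopClientVersion {
    public static void main(String[] args) {
        // Reports the version the Hadoop client jars on the classpath were
        // built from: a 2.x value means the JDK 11 cleaner problem above
        // applies, a 3.x value is the client-layer "gray area" case.
        System.out.println("Hadoop version: " + VersionInfo.getVersion());
        System.out.println("Build details:  " + VersionInfo.getBuildVersion());
    }
}

The same call also works from spark-shell, since the Hadoop client jars are already on Spark's classpath.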



Re: Submitting insert query from beeline failing on executor server with java 11

kaki mahesh raja
Hi Jungtaek Lim,

Thanks for the response. So our only option is to wait until Hadoop officially supports Java 11.


Thanks and regards,
kaki mahesh raja


