hadoop job failed when reading file from Alluxio

3 messages
hadoop job failed when reading file from Alluxio

TeddyBear1314
java.lang.IllegalArgumentException: offset=0, length=8388608, exceeding fileSize=585
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:119)
at alluxio.worker.block.io.LocalFileBlockReader.read(LocalFileBlockReader.java:61)
at alluxio.client.block.LocalBlockInStream.bufferedRead(LocalBlockInStream.java:105)
at alluxio.client.block.BufferedBlockInStream.updateBuffer(BufferedBlockInStream.java:228)
at alluxio.client.block.BufferedBlockInStream.read(BufferedBlockInStream.java:129)
at alluxio.client.file.FileInStream.read(FileInStream.java:208)
at alluxio.hadoop.HdfsFileInputStream.read(HdfsFileInputStream.java:185)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.skipUtfByteOrderMark(LineRecordReader.java:143)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:183)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
I can read the file from a Java client, but when I read it through Hadoop this error occurs. Can anyone help me? Thanks!
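For readers skimming the trace: the failure is the argument check at the top of the stack. The client asks the local block reader for an 8 MiB buffer's worth of data, but the block file on the worker is only 585 bytes long. A minimal, self-contained sketch of that check (simplified for illustration; the class and method below are not the actual Alluxio source) reproduces the exact message:

```java
// Simplified sketch (not the actual Alluxio source) of the precondition
// that fails in LocalFileBlockReader.read: the requested byte range
// must fit inside the local block file.
public class ReadCheckSketch {
    static void checkReadArgs(long offset, long length, long fileSize) {
        if (offset + length > fileSize) {
            throw new IllegalArgumentException(String.format(
                "offset=%d, length=%d, exceeding fileSize=%d",
                offset, length, fileSize));
        }
    }

    public static void main(String[] args) {
        try {
            // Values from the stack trace above: an 8 MiB buffered read
            // issued against a 585-byte block file.
            checkReadArgs(0, 8388608, 585);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

So the question is why the Hadoop code path issues a read whose length is the full buffer size rather than being capped at the file's actual length.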

--
You received this message because you are subscribed to the Google Groups "Alluxio Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/d/optout.
Re: hadoop job failed when reading file from Alluxio

Yupeng Fu
Hi,

What's the version of Alluxio and Hadoop that you are using?
Also, the exception message indicates that the requested read length (8388608 bytes) exceeds the file size (585 bytes). Can you double-check the file's size to make sure it matches your expectations?
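For example, you can compare the size Alluxio reports with the size Hadoop sees (the master host, port, and path below are placeholders; adjust them to your deployment, and run against a live cluster):

```shell
# Size of the file as the Alluxio CLI reports it
bin/alluxio fs ls /path/to/file

# Size of the same file as seen through the Hadoop-compatible client
hadoop fs -ls alluxio://master:19998/path/to/file
```

If the two lengths disagree, the metadata in Alluxio is likely stale relative to the under store.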

Thanks,


On Tue, Aug 16, 2016 at 7:19 PM, 黄志 <[hidden email]> wrote:
I can read the file from a Java client, but when I read it through Hadoop this error occurs. Can anyone help me? Thanks!

Re: hadoop job failed when reading file from Alluxio

Yupeng Fu
Hi, 

Has your issue been solved?

Thanks,

On Tuesday, August 16, 2016 at 8:59:43 PM UTC-7, Yupeng Fu wrote:
Hi,

What's the version of Alluxio and Hadoop that you are using?
Also, from the exception message, it seems the file length to read exceeds the file size. Can you double check the file size to make sure it meets your expectations?

Thanks,

Yupeng

Alluxio Inc (http://www.alluxio.com/)

On Tue, Aug 16, 2016 at 7:19 PM, 黄志 <[hidden email]> wrote:
I can read the file from a Java client, but when I read it through Hadoop this error occurs. Can anyone help me? Thanks!
