Client read exception

Client read exception

Reid Chan
public void writePacket(final ByteBuf buf) throws IOException {
  try {
    // Fail fast if the writer has already been closed.
    Preconditions.checkState(!mClosed, "PacketWriter is closed while writing packets.");
    int sz = buf.readableBytes();
    // Reserve space in the local block file up to the new write position.
    ensureReserved(mPos + sz);
    mPos += sz;
    // This is the check that fails here: the thrown IllegalStateException carries
    // no message, so it cannot be the first (messaged) checkState above. It trips
    // whenever readBytes transfers fewer bytes into the file channel than requested.
    Preconditions.checkState(buf.readBytes(mWriter.getChannel(), sz) == sz);
  } finally {
    buf.release();
  }
}

 
Caused by: java.lang.IllegalStateException
at com.google.common.base.Preconditions.checkState(Preconditions.java:129)
at alluxio.client.block.stream.LocalFilePacketWriter.writePacket(LocalFilePacketWriter.java:123)
at alluxio.client.block.stream.BlockOutStream.updateCurrentPacket(BlockOutStream.java:202)
at alluxio.client.block.stream.BlockOutStream.write(BlockOutStream.java:122)
at alluxio.client.file.FileInStream.readInternal(FileInStream.java:261)
at alluxio.client.file.FileInStream.readCurrentBlockToPos(FileInStream.java:796)
at alluxio.client.file.FileInStream.seekInternalWithCachingPartiallyReadBlock(FileInStream.java:751)
at alluxio.client.file.FileInStream.seek(FileInStream.java:369)
at alluxio.hadoop.HdfsFileInputStream.seek(HdfsFileInputStream.java:159)
at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:65)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:362)
at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.&lt;init&gt;(ReaderImpl.java:311)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:228)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.populateAndCacheStripeDetails(OrcInputFormat.java:958)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:870)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$SplitGenerator.call(OrcInputFormat.java:736)


My ad-hoc client always hits the exception above; the related code snippet (LocalFilePacketWriter.writePacket) is pasted above the stack trace.

Any hint as to why the ByteBuf can't always fully transfer the data into the channel?
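
For reference, Netty documents ByteBuf.readBytes(GatheringByteChannel, int) as returning the actual number of bytes written, which can be less than requested when a channel write is partial. A minimal sketch of a loop that tolerates partial writes (an illustration of the contract only, not the Alluxio code):

import io.netty.buffer.ByteBuf;
import java.io.IOException;
import java.nio.channels.GatheringByteChannel;

public final class ChannelWrites {
  // Drains every readable byte of buf into channel, looping because a single
  // readBytes call may transfer fewer bytes than requested. Assumes a blocking
  // channel such as FileChannel; a non-blocking channel could spin here.
  public static void writeFully(ByteBuf buf, GatheringByteChannel channel)
      throws IOException {
    while (buf.isReadable()) {
      // readBytes advances readerIndex by the actual number of bytes written.
      buf.readBytes(channel, buf.readableBytes());
    }
  }
}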

Alluxio version is 1.6.1.

Thanks in advance.


Re: Client read exception

binfan
Administrator
Is it possible that the local file is full?
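
One quick way to check, assuming a hypothetical mount point for the worker's ramdisk tier (substitute your alluxio.worker.tieredstore.level0.dirs.path):

import java.io.File;

public final class RamdiskSpaceCheck {
  public static void main(String[] args) {
    // Hypothetical mount point; use the worker's configured level-0 storage dir.
    File ramdisk = new File("/mnt/ramdisk");
    System.out.printf("usable: %d bytes of %d total%n",
        ramdisk.getUsableSpace(), ramdisk.getTotalSpace());
  }
}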

Also, do you see the same issue with 1.8.0?
I assume there have been client-side bug fixes between 1.6 and 1.8.

- Bin


Re: Client read exception

Reid Chan
The local file is on a ramdisk, and the remaining space is sufficient for writing.



Re: Client read exception

Calvin Jia
Are any bytes written to the local file? If no bytes are written, it could be a permissions-related issue (though the ramdisk default is 777).
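
For example, a quick check along these lines (the block file path below is hypothetical; locate the real file under the worker's storage directory):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;

public final class BlockFileCheck {
  public static void main(String[] args) throws IOException {
    // Hypothetical block file location under the ramdisk tier.
    Path block = Paths.get("/mnt/ramdisk/alluxioworker/12345");
    System.out.println("size:  " + Files.size(block) + " bytes");
    System.out.println("perms: "
        + PosixFilePermissions.toString(Files.getPosixFilePermissions(block)));
  }
}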

Thanks,
Calvin
