java.io.IOException: Block 5922357248 is not available in Alluxio

java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
Hi everyone,
How can I track down what is going on here? It takes my Spark jobs down.
Does "block is not available" mean the block is lost? How can I tell which file or folder it belongs to?

Thanks.

16/07/20 12:03:15 WARN TaskSetManager: Lost task 67.0 in stage 58.0 (TID 12713, slave5.dev-etl.ants.vn): java.io.IOException: Block 5922357248 is not available in Alluxio
at alluxio.client.block.AlluxioBlockStore.getInStream(AlluxioBlockStore.java:115)
at alluxio.client.file.FileInStream.updateBlockInStream(FileInStream.java:508)
at alluxio.client.file.FileInStream.updateStreams(FileInStream.java:415)
at alluxio.client.file.FileInStream.close(FileInStream.java:147)
at alluxio.hadoop.HdfsFileInputStream.close(HdfsFileInputStream.java:115)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:432)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
at org.apache.parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:157)
at org.apache.parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:140)
at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:180)
at org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:126)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
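
To the question of which file a block belongs to: Alluxio's 1.x source (alluxio.master.block.BlockId) appears to encode block IDs as a container (file) ID in the high bits and a sequence number in the low 24 bits. A minimal sketch of decoding the block ID from the stack trace, assuming that layout (verify against your version's source before relying on it):

```python
# Sketch: recover the container (file) ID and block index from an
# Alluxio 1.x block ID, assuming the layout in alluxio.master.block.BlockId:
#   blockId = (containerId << 24) | sequenceNumber

SEQUENCE_NUMBER_BITS = 24
SEQUENCE_NUMBER_MASK = (1 << SEQUENCE_NUMBER_BITS) - 1

def container_id(block_id: int) -> int:
    """High bits: the container ID shared by all blocks of one file."""
    return block_id >> SEQUENCE_NUMBER_BITS

def sequence_number(block_id: int) -> int:
    """Low 24 bits: the block's index within its file."""
    return block_id & SEQUENCE_NUMBER_MASK

bid = 5922357248  # the block from the stack trace
print(container_id(bid), sequence_number(bid))  # -> 353 0
```

With the container ID in hand, one way to locate the owning path is to run the 1.x shell command `bin/alluxio fs fileInfo <path>` on candidate paths and look for the block ID in its output.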


--
You received this message because you are subscribed to the Google Groups "Alluxio Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/d/optout.

Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
Any idea what went wrong?



Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
In reply to this post by Chanh Le
Any luck with this?
It happened again, no matter what I tried.



Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Yupeng Fu
Hi Chanh,

Which version are you using? It'll be helpful to see the logs from master and workers.

Thanks.



Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
I uploaded everything here: https://drive.google.com/open?id=0B9sW2781psiHQzVBUS1vQmtrQkE

On Jul 27, 2016, at 11:01 PM, Chanh Le <[hidden email]> wrote:

Hi Yupeng,

I am using Alluxio 1.2.0 RC2, but this error also happened when I used 1.1.0 and 1.1.1.


<master.log>
<worker (1).log>
<task.log>
<user_root (1).log>


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

binfan
Administrator
Hi Chanh,

A few more questions to clarify:

- When you retry, do you always encounter this exception deterministically? And if yes, does it always complain about the same block?
- What Spark version are you running?
- What is your Alluxio setting (e.g., did you customize your alluxio-env.sh or alluxio-site.properties file)?
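
For readers checking their own deployments: the settings Bin asks about live in conf/alluxio-site.properties. A minimal sketch, with property names as documented for Alluxio 1.x (the values are placeholders, not recommendations):

```properties
# Memory each worker offers to Alluxio storage (placeholder value).
alluxio.worker.memory.size=16GB

# Default client write type; MUST_CACHE keeps data only in Alluxio
# storage, so an evicted block has no copy to fall back on.
alluxio.user.file.writetype.default=MUST_CACHE

# Eviction policy applied when a worker's storage fills up.
alluxio.worker.evictor.class=alluxio.worker.block.evictor.LRUEvictor
```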


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

binfan
Administrator
More questions:

Do you use MUST_CACHE (the default write type) when you create the parquet files that you later read and hit issues with?

How large are your files, and how much memory space did you give Alluxio? It looks to me as though the block in question is evicted before you read it, due to the limited memory allocated to Alluxio.
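
If eviction is indeed the cause, one common mitigation is to write with CACHE_THROUGH instead of MUST_CACHE, so each block is also persisted to the under storage and can be re-read after eviction rather than erroring. A sketch, assuming the standard Alluxio 1.x client property:

```properties
# In conf/alluxio-site.properties on the client side:
# persist writes to the under store as well as Alluxio storage,
# so an evicted block can be re-read from the UFS.
alluxio.user.file.writetype.default=CACHE_THROUGH
```

The trade-off is slower writes, since data is written through to the under storage synchronously.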


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
Hi Bin,


Hi Chanh,

A few more questions to clarify:

- When you retry, do you always encounter this exception deterministically? And if yes, does it always complain about the same block?

Yes. For different files and folders it shows a different block, but not all of them fail; some of them are OK.

- What Spark version are you running?

I am using both 1.6.1 and 2.0.0

- What is your Alluxio setting? (e.g., did you customize your alluxio-env.sh or alluxio-site.properties file)?
I don’t customise the settings much; I just changed the user write location policy to MostAvailablePolicy. I attached the files.

More questions:

Do you use MUST_CACHE (the default write type) when you create the parquet files that you later read and hit issues with?

How large are your files, and how much memory space did you give Alluxio? It looks to me as though the block in question is evicted before you read it, due to the limited memory allocated to Alluxio.

I don’t change this property when I write to Alluxio, so it is the default, MUST_CACHE.
The files I write are usually about 100 MB to 500 MB.
I checked the memory usage, and my workers still have a lot of memory left.
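
A note for later readers: with MUST_CACHE, free memory across the cluster does not by itself rule out eviction, since each block must fit on the worker the location policy picks, and the working set accumulates across jobs. A back-of-envelope sketch (every number below is hypothetical, not from this thread):

```python
# Hypothetical capacity check: does the cached working set fit
# in Alluxio memory, cluster-wide and per worker?
files_written = 200            # parquet files cached so far (hypothetical)
avg_file_mb = 300              # 100-500 MB per file, per the thread
worker_mem_mb = 16 * 1024      # Alluxio memory per worker (hypothetical)
workers = 5                    # number of workers (hypothetical)

working_set_mb = files_written * avg_file_mb
capacity_mb = worker_mem_mb * workers
print(working_set_mb, capacity_mb, working_set_mb > capacity_mb)
```

If the working set exceeds capacity, or one worker fills up under a skewed location policy, LRU eviction will silently drop blocks that MUST_CACHE never persisted anywhere else.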

Thanks for the support.

Regards,
Chanh


--
You received this message because you are subscribed to the Google Groups "Alluxio Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to the Google Groups "Alluxio Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/d/optout.


On Jul 28, 2016, at 1:07 AM, Bin Fan <[hidden email]> wrote:

More questions:

Do you use MUST_CACHE (which is the default write type) when you create the parquet files that you read and encounter issues later?

How large are your files and your Alluxio memory space size ? It looks to me the block in question is evicted before you read it, due to limited memory size you allocated to Alluxio

On Wednesday, July 27, 2016 at 10:48:18 AM UTC-7, Bin Fan wrote:
Hi Chanh,

A few more questions to clarify:

- When you retry multiple times, do you always encounter this exception deterministically? And if yes, is it always complaining about the same block?
- What Spark version are you running?
- What is your Alluxio setting? (e.g., did you customize your alluxio-env.sh or alluxio-site.properties file)?

On Wednesday, July 27, 2016 at 9:08:32 AM UTC-7, Chanh Le wrote:
I uploaded everything here: https://drive.google.com/open?id=0B9sW2781psiHQzVBUS1vQmtrQkE

On Jul 27, 2016, at 11:01 PM, Chanh Le <[hidden email]> wrote:

Hi Yupeng,

I am using Alluxio 1.2.0-RC2, but this error also happened when I used 1.1.0 and 1.1.1.


<master.log>
<worker (1).log>
<task.log>
<user_root (1).log>

On Jul 27, 2016, at 10:25 PM, Yupeng Fu <[hidden email]> wrote:

Hi Chanh,

Which version are you using? It'll be helpful to see the logs from master and workers.

Thanks.

Yupeng

Alluxio Inc

On Wed, Jul 27, 2016 at 7:45 AM, Chanh Le <[hidden email]> wrote:
Any luck on that?
It happened again, no matter what I tried.


On Wednesday, July 20, 2016 at 12:06:36 PM UTC+7, Chanh Le wrote:
Hi everyone,
How can I track what is going on here? It takes my Spark jobs down.
Does "block is not available" mean the block is lost? How can I tell which file or folder it belongs to?

Thanks.

16/07/20 12:03:15 WARN TaskSetManager: Lost task 67.0 in stage 58.0 (TID 12713, slave5.dev-etl.ants.vn): java.io.IOException: Block 5922357248 is not available in Alluxio
at alluxio.client.block.AlluxioBlockStore.getInStream(AlluxioBlockStore.java:115)
at alluxio.client.file.FileInStream.updateBlockInStream(FileInStream.java:508)
at alluxio.client.file.FileInStream.updateStreams(FileInStream.java:415)
at alluxio.client.file.FileInStream.close(FileInStream.java:147)
at alluxio.hadoop.HdfsFileInputStream.close(HdfsFileInputStream.java:115)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:432)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
at org.apache.parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:157)
at org.apache.parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:140)
at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:180)
at org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:126)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
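To answer the question of mapping a block ID back to a file: in Alluxio 1.x a block ID packs a container ID above the lowest 24 bits, and the owning file's ID uses the same container with the maximum sequence number. This is an internal layout (BlockId.java) and may change between versions, so treat the following as a hedged sketch rather than a stable API:

```python
# Hedged sketch based on Alluxio 1.x internals (BlockId.java):
# a block ID = (container ID << 24) | sequence number, and the file that
# owns the block has ID (container ID << 24) | max sequence number.
SEQUENCE_BITS = 24
MAX_SEQUENCE = (1 << SEQUENCE_BITS) - 1

def container_id(block_id):
    """Container ID shared by a file and all of its blocks."""
    return block_id >> SEQUENCE_BITS

def file_id_for_block(block_id):
    """File ID that (under the 1.x layout) owns this block."""
    return (container_id(block_id) << SEQUENCE_BITS) | MAX_SEQUENCE

# The block from the stack trace above:
print(container_id(5922357248))       # 353
print(file_id_for_block(5922357248))  # 5939134463
```

The resulting file ID can then be searched for in the master log or web UI to recover the path; how that lookup is exposed depends on the Alluxio version.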







alluxio-env.sh (1K) Download Attachment
alluxio-site.properties (3K) Download Attachment

Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
In reply to this post by Chanh Le
Hi,
Any progress on this?
It is still happening to me.
Even after updating to the latest version, I lose data every time.
Does Alluxio have a REPLICA mode or something similar?
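For what it's worth, Alluxio 1.x does not replicate blocks across workers; durability comes from the under storage. Files already written with MUST_CACHE can be persisted to the UFS after the fact with the shell (a hedged suggestion; the path below is one of the folders mentioned later in this thread):

```
./bin/alluxio fs persist /FACT_AD_STATS_DAILY
```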

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 244.0 failed 4 times, most recent failure: Lost task 0.3 in stage 244.0 (TID 21414, slave1): java.io.IOException: Block 124537274368 is not available in Alluxio
at alluxio.client.block.AlluxioBlockStore.getInStream(AlluxioBlockStore.java:114)
at alluxio.client.file.FileInStream.updateBlockInStream(FileInStream.java:508)
at alluxio.client.file.FileInStream.updateStreams(FileInStream.java:415)
at alluxio.client.file.FileInStream.close(FileInStream.java:147)
at alluxio.hadoop.HdfsFileInputStream.close(HdfsFileInputStream.java:108)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:432)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
at org.apache.spark.sql.execution.datasources.parquet.SpecificParquetRecordReaderBase.initialize(SpecificParquetRecordReaderBase.java:100)
at org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader.initialize(VectorizedParquetRecordReader.java:109)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReader$1.apply(ParquetFileFormat.scala:360)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$buildReader$1.apply(ParquetFileFormat.scala:339)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:116)
at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:91)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:784)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)








Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
In reply to this post by Chanh Le
This also happens when I am persisting a FOLDER:

persisted file /FACT_AD_STATS_DAILY/time=2016-07-19/network_id=24608/part-r-00003-81ee10be-bcd8-4418-850e-3ce3da0b972c.snappy.parquet with size 3702740
persisted file /FACT_AD_STATS_DAILY/time=2016-07-19/network_id=24608/part-r-00001-81ee10be-bcd8-4418-850e-3ce3da0b972c.snappy.parquet with size 3692808
persisted file /FACT_AD_STATS_DAILY/time=2016-07-19/network_id=24608/part-r-00002-81ee10be-bcd8-4418-850e-3ce3da0b972c.snappy.parquet with size 3650215
persisted file /FACT_AD_STATS_DAILY/time=2016-07-19/network_id=24608/part-r-00000-81ee10be-bcd8-4418-850e-3ce3da0b972c.snappy.parquet with size 3657195
Block 195488120832 is not available in Alluxio
Block 195504898048 is not available in Alluxio
Block 196914184192 is not available in Alluxio
Block 192417890304 is not available in Alluxio
Block 192434667520 is not available in Alluxio
Block 193357414400 is not available in Alluxio
Block 193374191616 is not available in Alluxio
Block 193340637184 is not available in Alluxio
Block 193055424512 is not available in Alluxio
Block 193088978944 is not available in Alluxio
Block 193105756160 is not available in Alluxio










Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
In reply to this post by Chanh Le
Continuing to debug.

Could this be related to running the Alluxio cluster without a shared under file system such as GlusterFS or HDFS? Right now I only use Alluxio, and the under storage is the same local folder path on every node in the cluster.


 




Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Gene Pang
Hi,

I see. Thanks for the info. How many machines are you running on? With Alluxio, all masters and workers need access to all under file systems, because both masters and workers read and write them. Therefore, if you are running on multiple nodes, a local file system UFS will not work correctly.

How are you trying to use the UnderFileSystem?

Thanks,
Gene
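To make the requirement concrete: every master and worker must point at the same shared under storage. A minimal sketch for an HDFS-backed UFS, assuming the Alluxio 1.x property name and a placeholder namenode address:

```
# alluxio-site.properties on every master and worker
# (hostname and port are placeholders)
alluxio.underfs.address=hdfs://namenode:9000/alluxio
```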


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
Hi Gene,

On Aug 2, 2016, at 8:56 PM, Gene Pang <[hidden email]> wrote:

Hi,

I see. Thanks for the info. How many machines are you running on?

There are 5 machines.

With Alluxio, all masters and workers need access to all under file systems, because both masters and workers read and write them. Therefore, if you are running on multiple nodes, a local file system UFS will not work correctly.

How are you trying to use the UnderFileSystem?


Thanks for the info. I am now trying HDFS as the under storage.

Regards,
Chanh




Thanks,
Gene


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Gene Pang
Ok, glad we could help!

-Gene


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
One more thing: why do all the tests pass when I run them?

[root@master1:/home/spark/alluxio-1.2.0]# ./bin/alluxio runTests
2016-08-03 15:28:09,461 INFO  type (AbstractClient.java:connect) - Alluxio client (version 1.2.0) is trying to connect with FileSystemMasterClient master @ master2/10.197.0.4:19998
2016-08-03 15:28:09,474 INFO  type (AbstractClient.java:connect) - Client registered with FileSystemMasterClient master @ master2/10.197.0.4:19998
runTest Basic CACHE_PROMOTE MUST_CACHE
2016-08-03 15:28:09,536 INFO  type (AbstractClient.java:connect) - Alluxio client (version 1.2.0) is trying to connect with BlockMasterClient master @ master2/10.197.0.4:19998
2016-08-03 15:28:09,542 INFO  type (AbstractClient.java:connect) - Client registered with BlockMasterClient master @ master2/10.197.0.4:19998
2016-08-03 15:28:09,666 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:09,726 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:09,797 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:09,818 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_CACHE_PROMOTE_MUST_CACHE took 301 ms.
2016-08-03 15:28:09,846 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:09,857 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:09,871 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:09,875 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:09,888 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261439356928 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:09,896 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:09,904 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:09,925 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,439,356,928, sessionId: 6,150,346,585,558,616,747, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
2016-08-03 15:28:09,927 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_CACHE_PROMOTE_MUST_CACHE took 109 ms.
Passed the test!
runTest BasicNonByteBuffer CACHE_PROMOTE MUST_CACHE
2016-08-03 15:28:09,931 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:09,935 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:09,944 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:09,947 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:09,953 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:09,956 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:09,958 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:09,965 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261456134144 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:09,969 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:09,976 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:09,976 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,456,134,144, sessionId: 7,779,249,575,726,637,822, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
Passed the test!
runTest Basic CACHE_PROMOTE CACHE_THROUGH
2016-08-03 15:28:10,021 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,028 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,036 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,038 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_CACHE_PROMOTE_CACHE_THROUGH took 61 ms.
2016-08-03 15:28:10,039 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,045 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,047 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,050 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,061 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261472911360 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,063 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,065 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,065 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,472,911,360, sessionId: 7,676,144,320,530,764,352, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
2016-08-03 15:28:10,065 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_CACHE_PROMOTE_CACHE_THROUGH took 27 ms.
Passed the test!
runTest BasicNonByteBuffer CACHE_PROMOTE CACHE_THROUGH
2016-08-03 15:28:10,070 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,073 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,075 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,079 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,084 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,116 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,118 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,120 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261489688576 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,122 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,123 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,124 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,489,688,576, sessionId: 4,491,307,355,326,078,767, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
Passed the test!
runTest Basic CACHE_PROMOTE THROUGH
2016-08-03 15:28:10,127 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_CACHE_PROMOTE_THROUGH took 2 ms.
2016-08-03 15:28:10,131 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,132 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,134 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,135 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_CACHE_PROMOTE_THROUGH took 8 ms.
Passed the test!
runTest BasicNonByteBuffer CACHE_PROMOTE THROUGH
2016-08-03 15:28:10,141 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,143 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,144 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
Passed the test!
runTest Basic CACHE_PROMOTE ASYNC_THROUGH
2016-08-03 15:28:10,148 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,149 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,151 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,166 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_CACHE_PROMOTE_ASYNC_THROUGH took 19 ms.
2016-08-03 15:28:10,167 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,171 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,173 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,176 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,177 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261540020224 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,179 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,180 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,180 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,540,020,224, sessionId: 3,107,757,617,898,721,315, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
2016-08-03 15:28:10,181 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_CACHE_PROMOTE_ASYNC_THROUGH took 15 ms.
Passed the test!
runTest BasicNonByteBuffer CACHE_PROMOTE ASYNC_THROUGH
2016-08-03 15:28:10,184 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,186 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,187 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,190 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,193 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,195 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,197 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,198 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261556797440 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,199 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,201 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,201 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,556,797,440, sessionId: 1,657,979,994,829,799,935, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
Passed the test!
runTest Basic CACHE MUST_CACHE
2016-08-03 15:28:10,203 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,205 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,206 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,208 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_CACHE_MUST_CACHE took 6 ms.
2016-08-03 15:28:10,211 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,214 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,215 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,216 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261573574656 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,218 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,219 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,219 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,573,574,656, sessionId: 2,883,636,709,066,679,720, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
2016-08-03 15:28:10,219 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_CACHE_MUST_CACHE took 11 ms.
Passed the test!
runTest BasicNonByteBuffer CACHE MUST_CACHE
2016-08-03 15:28:10,222 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,223 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,225 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,229 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,231 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,232 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,233 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261590351872 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,234 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,236 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,236 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,590,351,872, sessionId: 6,407,935,829,386,447,597, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
Passed the test!
runTest Basic CACHE CACHE_THROUGH
2016-08-03 15:28:10,239 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,241 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,242 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,244 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_CACHE_CACHE_THROUGH took 7 ms.
2016-08-03 15:28:10,247 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,249 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,250 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,251 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261607129088 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,252 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,254 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,254 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,607,129,088, sessionId: 8,655,934,243,785,086,452, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
2016-08-03 15:28:10,254 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_CACHE_CACHE_THROUGH took 10 ms.
Passed the test!
runTest BasicNonByteBuffer CACHE CACHE_THROUGH
2016-08-03 15:28:10,258 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,260 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,261 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,265 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,268 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,270 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,271 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261623906304 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,272 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,274 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,274 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,623,906,304, sessionId: 1,736,702,224,076,929,203, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
Passed the test!
runTest Basic CACHE THROUGH
2016-08-03 15:28:10,277 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_CACHE_THROUGH took 2 ms.
2016-08-03 15:28:10,279 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,280 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,281 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,284 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_CACHE_THROUGH took 7 ms.
Passed the test!
runTest BasicNonByteBuffer CACHE THROUGH
2016-08-03 15:28:10,289 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,290 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,292 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
Passed the test!
runTest Basic CACHE ASYNC_THROUGH
2016-08-03 15:28:10,295 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,296 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,297 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,299 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_CACHE_ASYNC_THROUGH took 6 ms.
2016-08-03 15:28:10,301 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,303 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,305 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,306 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261674237952 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,307 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,309 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,309 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,674,237,952, sessionId: 3,849,958,977,120,000,373, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
2016-08-03 15:28:10,309 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_CACHE_ASYNC_THROUGH took 10 ms.
Passed the test!
runTest BasicNonByteBuffer CACHE ASYNC_THROUGH
2016-08-03 15:28:10,312 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,313 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,314 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,319 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,324 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,326 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,327 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261691015168 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,329 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,330 INFO  type (NettyRemoteBlockWriter.java:write) - status: WRITE_ERROR from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,331 INFO  type (FileInStream.java:closeOrCancelCacheStream) - Closing or cancelling the cache stream encountered IOExecption java.io.IOException: Error writing blockId: 261,691,015,168, sessionId: 5,919,856,110,590,208,723, address: master4/10.197.0.6:29999, message: Failed to write block., reading from the regular stream won't be affected.
Passed the test!
runTest Basic NO_CACHE MUST_CACHE
2016-08-03 15:28:10,333 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,335 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,336 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,338 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_NO_CACHE_MUST_CACHE took 6 ms.
2016-08-03 15:28:10,340 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,343 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,344 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261707792384 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,345 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_NO_CACHE_MUST_CACHE took 7 ms.
Passed the test!
runTest BasicNonByteBuffer NO_CACHE MUST_CACHE
2016-08-03 15:28:10,348 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,349 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,351 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,372 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,374 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,375 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261724569600 from remote machine master4/10.197.0.6:29999 received
Passed the test!
runTest Basic NO_CACHE CACHE_THROUGH
2016-08-03 15:28:10,382 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,383 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,384 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,386 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_NO_CACHE_CACHE_THROUGH took 9 ms.
2016-08-03 15:28:10,388 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,390 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,391 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261741346816 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,392 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_NO_CACHE_CACHE_THROUGH took 6 ms.
Passed the test!
runTest BasicNonByteBuffer NO_CACHE CACHE_THROUGH
2016-08-03 15:28:10,395 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,398 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,399 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,403 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,406 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,407 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261758124032 from remote machine master4/10.197.0.6:29999 received
Passed the test!
runTest Basic NO_CACHE THROUGH
2016-08-03 15:28:10,411 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_NO_CACHE_THROUGH took 3 ms.
2016-08-03 15:28:10,412 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_NO_CACHE_THROUGH took 1 ms.
Passed the test!
runTest BasicNonByteBuffer NO_CACHE THROUGH
Passed the test!
runTest Basic NO_CACHE ASYNC_THROUGH
2016-08-03 15:28:10,420 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,422 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,423 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,425 INFO  type (BasicOperations.java:writeFile) - writeFile to file /default_tests_files/Basic_NO_CACHE_ASYNC_THROUGH took 7 ms.
2016-08-03 15:28:10,427 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,429 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,431 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261808455680 from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,431 INFO  type (BasicOperations.java:readFile) - readFile file /default_tests_files/Basic_NO_CACHE_ASYNC_THROUGH took 6 ms.
Passed the test!
runTest BasicNonByteBuffer NO_CACHE ASYNC_THROUGH
2016-08-03 15:28:10,434 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,436 INFO  type (NettyRemoteBlockWriter.java:write) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,438 INFO  type (NettyRemoteBlockWriter.java:write) - status: SUCCESS from remote machine master4/10.197.0.6:29999 received
2016-08-03 15:28:10,443 INFO  type (BlockWorkerClient.java:connectOperation) - Connecting to remote worker @ master4/10.197.0.6:29998
2016-08-03 15:28:10,445 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Connected to remote machine master4/10.197.0.6:29999
2016-08-03 15:28:10,446 INFO  type (NettyRemoteBlockReader.java:readRemoteBlock) - Data 261825232896 from remote machine master4/10.197.0.6:29999 received
Passed the test!






On Tuesday, August 2, 2016 at 11:27:12 PM UTC+7, Gene Pang wrote:
Ok, glad we could help!

-Gene

--
You received this message because you are subscribed to the Google Groups "Alluxio Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
For more options, visit https://groups.google.com/d/optout.

Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Gene Pang
Since the basic tests only deal with a small amount of data, it is unlikely that any eviction is triggered.

Thanks,
Gene


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
In reply to this post by Chanh Le
Hi Gene,
It's still happening even though I am using Hadoop as the under storage now.
I have attached the logs here.
Could you please take a look?

On Wednesday, July 20, 2016 at 12:06:36 PM UTC+7, Chanh Le wrote:
Hi everyone,
How can I track down what is going on with this? It takes my Spark jobs down.
Does "block is not available" mean the block is lost? How do I find out which file or folder it belongs to?

Thanks.

16/07/20 12:03:15 WARN TaskSetManager: Lost task 67.0 in stage 58.0 (TID 12713, slave5.dev-etl.ants.vn): java.io.IOException: Block 5922357248 is not available in Alluxio
at alluxio.client.block.AlluxioBlockStore.getInStream(AlluxioBlockStore.java:115)
at alluxio.client.file.FileInStream.updateBlockInStream(FileInStream.java:508)
at alluxio.client.file.FileInStream.updateStreams(FileInStream.java:415)
at alluxio.client.file.FileInStream.close(FileInStream.java:147)
at alluxio.hadoop.HdfsFileInputStream.close(HdfsFileInputStream.java:115)
at java.io.FilterInputStream.close(FilterInputStream.java:181)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:432)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:385)
at org.apache.parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:157)
at org.apache.parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:140)
at org.apache.spark.rdd.SqlNewHadoopRDD$$anon$1.<init>(SqlNewHadoopRDD.scala:180)
at org.apache.spark.rdd.SqlNewHadoopRDD.compute(SqlNewHadoopRDD.scala:126)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
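To answer the "which file is this block from?" part of the question: in Alluxio 1.x, a block id encodes the owning file. The upper bits hold the file's container id and the lower 24 bits hold the block's sequence number within the file, and a file's id is its container id combined with the maximum sequence number. A minimal sketch, assuming that 1.x id layout (verify against your Alluxio version's BlockId class):

```python
# Alluxio 1.x block ids pack the owning file's container id into the
# upper bits; the lower 24 bits are the block's sequence number.
SEQUENCE_BITS = 24
MAX_SEQUENCE = (1 << SEQUENCE_BITS) - 1  # 0xFFFFFF

def container_id(block_id):
    """Container id of the file that owns this block."""
    return block_id >> SEQUENCE_BITS

def file_id(block_id):
    """A file's id is its container id with the maximum sequence number."""
    return (container_id(block_id) << SEQUENCE_BITS) | MAX_SEQUENCE

# The block from the stack trace above:
bid = 5922357248
print(container_id(bid))  # 353
print(file_id(bid))       # 5939134463
```

The resulting file id can then be matched against the master logs or the web UI to identify the path.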



Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Gene Pang
Hi,

Is it possible that the block is simply being evicted? If your write mode is "MUST_CACHE", it is possible that blocks can be evicted without being persisted to UFS.

Thanks,
Gene
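
As a mitigation for the eviction case described above, the client-side default write type can be switched so data is always persisted to the under storage as it is written. A sketch of the relevant alluxio-site.properties entry (property name as in Alluxio 1.x; verify against your version's configuration reference):

```
# Persist every write to the under storage in addition to caching it in
# Alluxio, so an evicted block can still be re-read from the UFS.
alluxio.user.file.writetype.default=CACHE_THROUGH
```

With THROUGH or CACHE_THROUGH, an evicted block can be re-read transparently from the under storage instead of surfacing "Block ... is not available".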


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Gene Pang
Hi Chanh,

Were you able to resolve your issue?

Thanks,
Gene


Re: java.io.IOException: Block 5922357248 is not available in Alluxio

Chanh Le
Hi Gene,

After I switched to Hadoop as the underFS, this problem went away.
Thanks for your support.

Regards,
Chanh


On Aug 11, 2016, at 9:05 PM, Gene Pang <[hidden email]> wrote:

Hi Chanh,

Were you able to resolve your issue?

Thanks,
Gene

