Alluxio on Spark

Alluxio on Spark

Kumar Gadamsetty
When trying to read files on Alluxio, it fails to connect to the Alluxio filesystem. I followed the steps in the documentation, but I'm not sure what I'm missing.

val input= "alluxio://localhost:19998/test.txt"
 
val df=spark.read.format("csv").option("header","true").load(input)
18/10/24 19:54:12 WARN FileStreamSink: Error while looking for metadata directory.
18/10/24 19:54:13 WARN FileStreamSink: Error while looking for metadata directory.
[Stage 0:>                                                          (0 + 1) / 1]18/10/24 19:59:14 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, ip-10-56-254-54.ec2.internal, executor 1): alluxio.exception.status.UnavailableException: Failed to connect to FileSystemMasterClient @ localhost/127.0.0.1:19998 after 100 attempts
        at alluxio.AbstractClient.connect(AbstractClient.java:217)
        at alluxio.hadoop.AbstractFileSystem.initializeInternal(AbstractFileSystem.java:493)
        at alluxio.hadoop.AbstractFileSystem.initialize(AbstractFileSystem.java:456)
        at alluxio.hadoop.FileSystem.initialize(FileSystem.java:27)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2859)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2896)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2878)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:392)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
        at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.initialize(LineRecordReader.java:84)
        at org.apache.spark.sql.execution.datasources.HadoopFileLinesReader.<init>(HadoopFileLinesReader.scala:46)
        at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:127)
        at org.apache.spark.sql.execution.datasources.text.TextFileFormat$$anonfun$readToUnsafeMem$1.apply(TextFileFormat.scala:124)
        at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:148)
        at org.apache.spark.sql.execution.datasources.FileFormat$$anon$1.apply(FileFormat.scala:132)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:125)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:179)
        at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:106)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:253)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

[Stage 0:>          

Re: Alluxio on Spark

Lu Qiu
Hi Kumar,

Could you check whether the Alluxio cluster is actually running on localhost? Are you able to access the Alluxio master web UI at http://localhost:19999?
When you run jps, do you see the AlluxioMaster and AlluxioWorker processes?
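
For example, on the machine that is supposed to host the master (paths assume a standard Alluxio install; the jps output below is illustrative):

$ jps
1234 AlluxioMaster
1235 AlluxioWorker
$ curl -s http://localhost:19999/ >/dev/null && echo "master web UI reachable"
$ ./bin/alluxio fs ls /    # run from the Alluxio install directory

If jps does not list AlluxioMaster, there is nothing for the client to connect to on port 19998.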

Thanks,
Lu

Re: Alluxio on Spark

Bin Fan
Hi Kumar,

Can you replace "localhost" in the following statement in your Spark code with the real IP of the server where your Alluxio master is running?

val input = "alluxio://localhost:19998/test.txt"

Re: Alluxio on Spark

Kumar Gadamsetty
It worked after changing localhost to the IP address of the server.
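
One way to keep the address out of the job code is to read it from an environment variable (the variable name below is just an example, not an Alluxio convention):

val masterHost = sys.env.getOrElse("ALLUXIO_MASTER_HOST", "localhost")
val input = s"alluxio://$masterHost:19998/test.txt"
val df = spark.read.format("csv").option("header", "true").load(input)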

Thanks Bin.

Re: Alluxio on Spark

Gene Pang
Thanks for the confirmation!

-Gene
