MapReduce Error


MapReduce Error

shuyunkun
Hi,

Alluxio: 1.2.0 (built against Hadoop 2.6.0).
Hadoop: 2.6.0.
Computation Framework: MapReduce.
OS: CentOS 6
JDK: 1.7.0

I followed the instructions for running MapReduce on Alluxio:
1. I already added the 3 properties to both hdfs-site.xml and core-site.xml, as shown below.
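(The property block itself did not come through. Most likely these were the standard Alluxio 1.x client properties for MapReduce; the following is a reconstruction, not the poster's verbatim config:)

<!-- reconstruction: standard Alluxio 1.x client properties, not confirmed -->
<property>
  <name>fs.alluxio.impl</name>
  <value>alluxio.hadoop.FileSystem</value>
</property>
<property>
  <name>fs.alluxio-ft.impl</name>
  <value>alluxio.hadoop.FaultTolerantFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.alluxio.impl</name>
  <value>alluxio.hadoop.AlluxioFileSystem</value>
</property>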
2. yarn jar test.jar com.test.IOReadTest -D alluxio.security.authentication.type=SIMPLE -libjars alluxio-assemblies-1.2.0-jar-with-dependencies.jar alluxio://172.19.121.51:19998/data alluxio://172.19.121.51:19998/test &

3. Then I got the following errors:

16/08/02 09:56:13 ERROR input.InputPathFilter: Error when call isFile
java.lang.IllegalArgumentException: Wrong FS: alluxio://172.19.121.51:19998/data/01, expected: hdfs://hstore
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:193)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:105)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1136)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1132)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1132)
at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1449)
at com.antfact.hstore.batch.mr.input.InputPathFilter.accept(InputPathFilter.java:57)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.callInternal(OrcInputFormat.java:1015)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.access$1300(OrcInputFormat.java:965)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator$1.run(OrcInputFormat.java:990)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator$1.run(OrcInputFormat.java:987)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:987)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:965)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

Am I missing any steps? Thanks.


Re: MapReduce Error

Gene Pang
Hi,

What is your program doing?

It looks like it is trying to use Hive. These other topics might be helpful:

https://groups.google.com/d/topic/alluxio-users/X5XK7EPLG0g/discussion
https://groups.google.com/d/topic/alluxio-users/9rrNBtywRbY/discussion

Thanks,
Gene


Re: MapReduce Error

Gene Pang
Hi,

Were you able to resolve your issue?

Thanks,
Gene


Re: MapReduce Error

shuyunkun
Hi,

After adding two parameters, "fs.default.name" and "alluxio.security.authentication.type", the error mentioned above is gone, but I got new exceptions, as shown below:
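(Presumably something like the following in core-site.xml; the values are inferred from the yarn command in the first message, not confirmed:)

<!-- inferred values, not the poster's verbatim config -->
<property>
  <name>fs.default.name</name>
  <value>alluxio://172.19.121.51:19998</value>
</property>
<property>
  <name>alluxio.security.authentication.type</name>
  <value>SIMPLE</value>
</property>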

java.lang.UnsupportedOperationException: Only supported for DFS; got class alluxio.hadoop.FileSystem
at org.apache.hadoop.hive.shims.Hadoop23Shims.ensureDfs(Hadoop23Shims.java:813)
at org.apache.hadoop.hive.shims.Hadoop23Shims.listLocatedHdfsStatus(Hadoop23Shims.java:784)
at org.apache.hadoop.hive.ql.io.AcidUtils.getAcidState(AcidUtils.java:481)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.callInternal(OrcInputFormat.java:999)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.access$1300(OrcInputFormat.java:965)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator$1.run(OrcInputFormat.java:990)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator$1.run(OrcInputFormat.java:987)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:987)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:965)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
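(The top stack frame points at Hive's Hadoop23Shims.ensureDfs, which is a hard type check: any FileSystem that is not HDFS's DistributedFileSystem is rejected, Alluxio included. A sketch reconstructed from the error message, not the verbatim Hive source:)

// sketch of org.apache.hadoop.hive.shims.Hadoop23Shims.ensureDfs,
// reconstructed from the stack trace above, not the verbatim Hive code
private DistributedFileSystem ensureDfs(FileSystem fs) {
    if (!(fs instanceof DistributedFileSystem)) {
        throw new UnsupportedOperationException("Only supported for DFS; got " + fs.getClass());
    }
    return (DistributedFileSystem) fs;
}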

16/08/09 17:12:32 INFO logger.type: create(/user/hduser/.staging/job_1470024122519_4394/job.split, rw-r--r--, true, 131072, 1, 268435456, null)
16/08/09 17:12:32 INFO logger.type: setMode(/user/hduser/.staging/job_1470024122519_4394/job.split,rw-r--r--)
16/08/09 17:12:32 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hduser/.staging/job_1470024122519_4394
16/08/09 17:12:32 INFO logger.type: delete(/user/hduser/.staging/job_1470024122519_4394, true)
Exception in thread "main" java.io.IOException: alluxio.exception.AccessControlException: Could not setMode for UFS file hdfs://hstore/user/hduser/.staging/job_1470024122519_4394/job.split
at alluxio.hadoop.AbstractFileSystem.setPermission(AbstractFileSystem.java:355)
at alluxio.hadoop.FileSystem.setPermission(FileSystem.java:25)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:578)
at org.apache.hadoop.mapreduce.split.JobSplitWriter.createFile(JobSplitWriter.java:101)
at org.apache.hadoop.mapreduce.split.JobSplitWriter.createSplitFiles(JobSplitWriter.java:77)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:603)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
at com.antfact.laundry.shuqi.job.IOReadTest.run(IOReadTest.java:72)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.antfact.laundry.shuqi.job.IOReadTest.main(IOReadTest.java:38)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: alluxio.exception.AccessControlException: Could not setMode for UFS file hdfs://hstore/user/hduser/.staging/job_1470024122519_4394/job.split
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at alluxio.exception.AlluxioException.fromThrift(AlluxioException.java:99)
at alluxio.AbstractClient.retryRPC(AbstractClient.java:326)
at alluxio.client.file.FileSystemMasterClient.setAttribute(FileSystemMasterClient.java:299)
at alluxio.client.file.BaseFileSystem.setAttribute(BaseFileSystem.java:298)
at alluxio.hadoop.AbstractFileSystem.setPermission(AbstractFileSystem.java:353)
... 23 more

16/08/09 17:12:32 ERROR hdfs.DFSClient: Failed to close inode 21988260
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hduser/.staging/job_1470024122519_4394/job.split.alluxio.0x55E733E402DD5CCF.tmp (inode 21988260): File does not exist. Holder DFSClient_NONMAPREDUCE_1340077265_1 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3798)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3886)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3856)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:725)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:528)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

at org.apache.hadoop.ipc.Client.call(Client.java:1469)
at org.apache.hadoop.ipc.Client.call(Client.java:1400)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy28.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:459)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy29.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2251)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2235)
at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:938)
at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:976)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:917)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2710)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2727)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
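(A possible workaround, untested here: pin the MapReduce staging directory explicitly to HDFS so that job.split is never written through Alluxio. Hadoop's standard yarn.app.mapreduce.am.staging-dir property controls this; the value below is an assumption based on the /user/hduser/.staging path in the log:)

<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <!-- assumed value; would resolve to hdfs://hstore/user/hduser/.staging as in the log -->
  <value>hdfs://hstore/user</value>
</property>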

BTW, the file format is ORC and the Hadoop distribution is HDP.

Thanks.

On Friday, August 5, 2016 at 10:01:02 PM UTC+8, Gene Pang wrote:
Hi,

Were you able to resolve your issue?

Thanks,
Gene


Re: MapReduce Error

Yupeng Fu
Hi,

Which particular version of Hive are you using?

Thanks,

Yupeng
Alluxio Inc



Re: MapReduce Error

shuyunkun
Hi,

I didn't use Hive directly, just ORC; the version of Hive is 2.0.1.

Thanks.
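(Since the DFS-only check sits in OrcInputFormat's split generation, one way to read ORC data off Alluxio is to open the file directly with the Hive OrcFile API and pass the FileSystem explicitly. A sketch, untested on this setup, with the path taken from the first message:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.orc.OrcFile;
import org.apache.hadoop.hive.ql.io.orc.Reader;
import org.apache.hadoop.hive.ql.io.orc.RecordReader;

public class OrcOnAlluxioSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // example path from the first message; any alluxio:// path works
        Path path = new Path("alluxio://172.19.121.51:19998/data/01");
        FileSystem fs = path.getFileSystem(conf); // resolves to alluxio.hadoop.FileSystem
        // passing the FileSystem explicitly bypasses Hive's DFS-only shim
        Reader reader = OrcFile.createReader(path,
                OrcFile.readerOptions(conf).filesystem(fs));
        RecordReader rows = reader.rows();
        Object row = null;
        while (rows.hasNext()) {
            row = rows.next(row);
            System.out.println(row);
        }
        rows.close();
    }
}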

On Tuesday, August 9, 2016 at 10:07:39 PM UTC+8, Yupeng Fu wrote:
Hi,

Which particular version of Hive are you using?

Thanks,

Yupeng
Alluxio Inc


Re: MapReduce Error

Gene Pang
Hi,

What are you running to get this error? Are you just reading ORC files, or are you running Hive queries?

Thanks,
Gene


Re: MapReduce Error

Gene Pang
Hi,

Were you able to resolve your issue?

Thanks,
Gene
