Linked commits:
0a671c4: fix permissions error on YARN ATS start (fixes #101)
7aa908c: follow up on fix permissions error on YARN ATS start (fixes #101)
Hi,
I am using Ambari version 2.7.11-71 to set up a cluster on Rocky Linux 9, with ODP build 1.2.4.0-55.
yum info ambari-server
Last metadata expiration check: 1:06:06 ago on Sun 24 Nov 2024 08:42:38 PM +0330.
Installed Packages
Name : ambari-server
Version : 2.7.11.0
Release : 71
Architecture : x86_64
Size : 617 M
Source : ambari-server-2.7.11.0-71.src.rpm
Repository : @System
From repo : ambari
Summary : Ambari Server
URL : https://www.apache.org
License : 2012, Apache Software Foundation
Description : Maven Recipe: RPM Package.
However, Timeline Service V1.5 and the ResourceManager cannot start. My cluster is not Kerberized.
Here is the relevant output from /var/lib/ambari-agent/data/output-73.txt on the host:
call['ambari-sudo.sh su yarn -l -s /bin/bash -c 'yarn app -enableFastLaunch''] {'timeout': 60}
2024-11-24 21:34:25,300 - call returned (56, 'SLF4J: Class path contains multiple SLF4J bindings.\nSLF4J: Found binding in [jar:file:/usr/odp/1.2.4.0-55/hadoop/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: Found binding in [jar:file:/usr/odp/1.2.4.0-55/hadoop-mapreduce/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]\nSLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.\nSLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]\n24/11/24 21:34:19 INFO impl.TimelineReaderClientImpl: Initialized TimelineReader URI=http://m1.bigdata.local:8198/ws/v2/timeline/, clusterId=yarn_cluster\n24/11/24 21:34:19 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at m1.bigdata.local/192.168.1.53:8050\n24/11/24 21:34:19 INFO client.AHSProxy: Connecting to Application History server at m1.bigdata.local/192.168.1.53:10200\n24/11/24 21:34:19 INFO impl.TimelineReaderClientImpl: Initialized TimelineReader URI=http://m1.bigdata.local:8198/ws/v2/timeline/, clusterId=yarn_cluster\n24/11/24 21:34:19 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at m1.bigdata.local/192.168.1.53:8050\n24/11/24 21:34:19 INFO client.AHSProxy: Connecting to Application History server at m1.bigdata.local/192.168.1.53:10200\n24/11/24 21:34:19 INFO impl.TimelineReaderClientImpl: Initialized TimelineReader URI=http://m1.bigdata.local:8198/ws/v2/timeline/, clusterId=yarn_cluster\n24/11/24 21:34:19 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at m1.bigdata.local/192.168.1.53:8050\n24/11/24 21:34:19 INFO client.AHSProxy: Connecting to Application History server at m1.bigdata.local/192.168.1.53:10200\n24/11/24 21:34:19 INFO client.ServiceClient: Running command as user yarn\n24/11/24 21:34:20 INFO utils.ServiceUtils: Tar-gzipping folders [/usr/odp/1.2.4.0-55/hadoop-yarn/./, /usr/odp/1.2.4.0-55/hadoop-yarn/lib, /usr/odp/1.2.4.0-55/hadoop-hdfs/./, /usr/odp/1.2.4.0-55/hadoop-hdfs/lib, /usr/odp/1.2.4.0-55/hadoop/./, /usr/odp/1.2.4.0-55/hadoop/lib] to /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/service-dep_17654146514488207529.tar.gz\n24/11/24 21:34:24 INFO utils.CoreFileSystem: Copying file file:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/service-dep_17654146514488207529.tar.gz to /odp/apps/1.2.4.0-55/yarn/service-dep.tar.gz\n24/11/24 21:34:24 ERROR client.ServiceClient: Got exception creating tarball and uploading to HDFS\norg.apache.hadoop.security.AccessControlException: Permission denied: user=yarn, access=WRITE, inode="/odp/apps/1.2.4.0-55/yarn":yarn:hadoop:dr-xr-xr-x\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:506)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:346)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1943)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1927)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1886)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:323)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2685)\n\tat 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2625)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:807)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:496)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)\n\tat java.base/java.security.AccessController.doPrivileged(Native Method)\n\tat java.base/javax.security.auth.Subject.doAs(Subject.java:423)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)\n\n\tat java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n\tat java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n\tat java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n\tat java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)\n\tat org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)\n\tat org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)\n\tat org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:286)\n\tat org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1271)\n\tat org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1250)\n\tat org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1232)\n\tat org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1170)\n\tat org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:569)\n\tat org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:566)\n\tat org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)\n\tat org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:580)\n\tat org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:507)\n\tat org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1233)\n\tat org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1210)\n\tat org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1091)\n\tat org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:489)\n\tat org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:430)\n\tat org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2592)\n\tat org.apache.hadoop.yarn.service.utils.CoreFileSystem.copyLocalFileToHdfs(CoreFileSystem.java:508)\n\tat org.apache.hadoop.yarn.service.client.ServiceClient.actionDependency(ServiceClient.java:1683)\n\tat org.apache.hadoop.yarn.service.client.ServiceClient.enableFastLaunch(ServiceClient.java:1644)\n\tat org.apache.hadoop.yarn.service.client.ApiServiceClient.enableFastLaunch(ApiServiceClient.java:519)\n\tat 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.executeEnableFastLaunchCommand(ApplicationCLI.java:1357)\n\tat org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:210)\n\tat org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)\n\tat org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:97)\n\tat org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:128)\nCaused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=yarn, access=WRITE, inode="/odp/apps/1.2.4.0-55/yarn":yarn:hadoop:dr-xr-xr-x\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:506)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:346)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1943)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1927)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1886)\n\tat org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.resolvePathForStartFile(FSDirWriteFileOp.java:323)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2685)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2625)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:807)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:496)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)\n\tat org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)\n\tat java.base/java.security.AccessController.doPrivileged(Native Method)\n\tat java.base/javax.security.auth.Subject.doAs(Subject.java:423)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)\n\n\tat org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1567)\n\tat org.apache.hadoop.ipc.Client.call(Client.java:1513)\n\tat org.apache.hadoop.ipc.Client.call(Client.java:1410)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)\n\tat com.sun.proxy.$Proxy25.create(Unknown Source)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:383)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.base/java.lang.reflect.Method.invoke(Method.java:566)\n\tat org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)\n\tat org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)\n\tat org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)\n\tat org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)\n\tat org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)\n\tat com.sun.proxy.$Proxy26.create(Unknown Source)\n\tat org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:280)\n\t... 24 more')
2024-11-24 21:34:25,300 - Failed to Enable Yarn FastLaunch
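The error shows that the HDFS directory /odp/apps/1.2.4.0-55/yarn is owned by yarn:hadoop but has mode dr-xr-xr-x (555), so even the owning yarn user is denied WRITE when the client tries to upload service-dep.tar.gz during enableFastLaunch. The linked commits above are the project's fix; as a rough manual workaround, assuming a standard hdfs superuser account is available and the path reported in the error is accurate, something like the following should let the upload succeed:

# Hypothetical manual workaround, not the project's actual fix.
# Run on a host with the HDFS client configured.

# 1. Confirm the ownership and mode reported in the error (expected: yarn:hadoop, dr-xr-xr-x)
sudo -u hdfs hdfs dfs -ls -d /odp/apps/1.2.4.0-55/yarn

# 2. Give the owning user write access (555 -> 755)
sudo -u hdfs hdfs dfs -chmod 755 /odp/apps/1.2.4.0-55/yarn

# 3. Re-run the command that failed in the Ambari output to verify
sudo -u yarn yarn app -enableFastLaunch

If that succeeds, restarting Timeline Service V1.5 and the ResourceManager from Ambari should get past the FastLaunch step, provided no other directories under /odp/apps have the same read-only mode.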