Configuring Kerberos Authentication for Hadoop (2.7.1) 2015-11-17 21:45

Setup

Create users

Add two users, hadoop and HTTP, to the Kerberos database.

All Hadoop components use these two users to authenticate with Kerberos.

kadmin.local -q "addprinc -randkey hadoop/ctrl@CHEYO.NET"
kadmin.local -q "addprinc -randkey hadoop/data01@CHEYO.NET"
kadmin.local -q "addprinc -randkey hadoop/data02@CHEYO.NET"
kadmin.local -q "addprinc -randkey hadoop/data03@CHEYO.NET"
kadmin.local -q "addprinc -randkey HTTP/ctrl@CHEYO.NET"
kadmin.local -q "addprinc -randkey HTTP/data01@CHEYO.NET"
kadmin.local -q "addprinc -randkey HTTP/data02@CHEYO.NET"
kadmin.local -q "addprinc -randkey HTTP/data03@CHEYO.NET"
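To confirm the principals were created, list them from the KDC (a quick check; assumes kadmin.local access on the KDC host):

kadmin.local -q "listprincs" | grep -E 'hadoop|HTTP'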

Create the keytab

Create a keytab containing the hadoop and HTTP principals:

cd /opt/hadoop/etc/
kadmin.local -q "xst -k hadoop.keytab hadoop/ctrl@CHEYO.NET"
kadmin.local -q "xst -k hadoop.keytab hadoop/data01@CHEYO.NET"
kadmin.local -q "xst -k hadoop.keytab hadoop/data02@CHEYO.NET"
kadmin.local -q "xst -k hadoop.keytab hadoop/data03@CHEYO.NET"
kadmin.local -q "xst -k hadoop.keytab HTTP/ctrl@CHEYO.NET"
kadmin.local -q "xst -k hadoop.keytab HTTP/data01@CHEYO.NET"
kadmin.local -q "xst -k hadoop.keytab HTTP/data02@CHEYO.NET"
kadmin.local -q "xst -k hadoop.keytab HTTP/data03@CHEYO.NET"
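Every node reads the keytab from the same path, so it must be distributed to all of them; a sketch, assuming the same host names used throughout this post:

scp /opt/hadoop/etc/hadoop.keytab data01:/opt/hadoop/etc/
scp /opt/hadoop/etc/hadoop.keytab data02:/opt/hadoop/etc/
scp /opt/hadoop/etc/hadoop.keytab data03:/opt/hadoop/etc/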

Test that the keytab is usable

kinit -kt /opt/hadoop/etc/hadoop.keytab hadoop/ctrl
klist
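klist -kt shows the principals and key version numbers inside the keytab without acquiring a ticket, which is handy when kinit fails:

klist -kt /opt/hadoop/etc/hadoop.keytab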

Add authentication for Zookeeper

Add the following settings to zoo.cfg

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000

Create a jaas.conf file in the configuration directory with the following content:

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/opt/hadoop/etc/hadoop.keytab"
    storeKey=true
    useTicketCache=false
    principal="hadoop/data01@CHEYO.NET";
};

Set keyTab to the absolute path of the real keytab file, and set principal to the user and hostname being authenticated.

Add a java.env file to the configuration directory with the following content:

export JVMFLAGS="-Djava.security.auth.login.config=/opt/zookeeper/conf/jaas.conf"

Make the changes above on every Zookeeper node.

Start Zookeeper

Start it the same way as before.
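To confirm the server actually loaded the JAAS configuration, check the startup log for SASL messages (the log path below is an assumption; adjust to your ZOO_LOG_DIR or zookeeper.out location):

grep -i sasl /opt/zookeeper/zookeeper.out | tail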

Add authentication for HDFS

core-site.xml

  • hadoop.security.authorization:true
  • hadoop.security.authentication:kerberos
<property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
</property>
<property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
</property>
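With authentication set to kerberos, Hadoop maps each principal to a short local user name through its auth_to_local rules; the mapping can be spot-checked with Hadoop's built-in resolver (a sanity check: with default rules and CHEYO.NET as the default realm, the principal should map to the short name hadoop):

hadoop org.apache.hadoop.security.HadoopKerberosName hadoop/ctrl@CHEYO.NET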

hdfs-site.xml

  • dfs.block.access.token.enable: true
  • dfs.namenode.keytab.file: /xx/xx/hadoop.keytab
  • dfs.namenode.kerberos.principal: hadoop/_HOST@CHEYO.NET
  • dfs.namenode.kerberos.internal.spnego.principal: HTTP/_HOST@CHEYO.NET
  • dfs.datanode.keytab.file: /xx/xx/hadoop.keytab
  • dfs.datanode.kerberos.principal: hadoop/_HOST@CHEYO.NET
  • dfs.datanode.address: 0.0.0.0:1004 (must be below 1024)
  • dfs.datanode.http.address: 0.0.0.0:1006 (must be below 1024)
  • dfs.journalnode.keytab.file: /xx/xx/hadoop.keytab
  • dfs.journalnode.kerberos.principal: hadoop/_HOST@CHEYO.NET
  • dfs.journalnode.kerberos.internal.spnego.principal: HTTP/_HOST@CHEYO.NET

_HOST is expanded at runtime to each daemon's own fully qualified hostname, so the same configuration files can be deployed to every node. The DataNode ports must be privileged (below 1024) because a secure DataNode binds them as root via jsvc before dropping privileges.
<property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
</property>
<property>
    <name>dfs.namenode.keytab.file</name>
    <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>dfs.namenode.kerberos.internal.spnego.principal</name>
    <value>HTTP/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>dfs.datanode.keytab.file</name>
    <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:1004</value>
</property>
<property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:1006</value>
</property>
<property>
    <name>dfs.journalnode.keytab.file</name>
    <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
    <name>dfs.journalnode.kerberos.principal</name>
    <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
    <value>HTTP/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>dfs.web.authentication.kerberos.keytab</name>
    <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>

hadoop-env.sh

# User the secure DataNode runs as after jsvc drops root privileges
export HADOOP_SECURE_DN_USER=hadoop
export HADOOP_SECURE_DN_PID_DIR=/opt/hadoop/pids
export HADOOP_SECURE_DN_LOG_DIR=/opt/hadoop/logs
# Directory containing the jsvc binary built below
export JSVC_HOME=/opt/hadoop/libexec/

Install and configure jsvc

Rebuild jsvc from source (the jsvc installed via yum does not work here):

wget http://mirror.bit.edu.cn/apache/commons/daemon/source/commons-daemon-1.0.15-native-src.tar.gz
tar zxf commons-daemon-1.0.15-native-src.tar.gz
cd commons-daemon-1.0.15-native-src/unix
./configure
make
cp jsvc /opt/hadoop/libexec/
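A quick sanity check that the freshly built binary runs on this platform (jsvc prints its usage text):

/opt/hadoop/libexec/jsvc -help | head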

Download the prebuilt commons-daemon library and replace the old jar:

wget http://mirror.bit.edu.cn/apache//commons/daemon/binaries/commons-daemon-1.0.15-bin.tar.gz
tar zxf commons-daemon-1.0.15-bin.tar.gz
cp commons-daemon-1.0.15/commons-daemon-1.0.15.jar /opt/hadoop/share/hadoop/hdfs/lib/
cp commons-daemon-1.0.15/commons-daemon-1.0.15.jar /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/
rm -f /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar
rm -f /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.13.jar

Copy the configuration and binaries to the other nodes:

scp /opt/hadoop/etc/hadoop/hadoop-env.sh data01:/opt/hadoop/etc/hadoop/
scp /opt/hadoop/etc/hadoop/hadoop-env.sh data02:/opt/hadoop/etc/hadoop/
scp /opt/hadoop/etc/hadoop/hadoop-env.sh data03:/opt/hadoop/etc/hadoop/
scp /opt/hadoop/libexec/jsvc data01:/opt/hadoop/libexec/
scp /opt/hadoop/libexec/jsvc data02:/opt/hadoop/libexec/
scp /opt/hadoop/libexec/jsvc data03:/opt/hadoop/libexec/
scp /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.15.jar data01:/opt/hadoop/share/hadoop/hdfs/lib/
scp /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.15.jar data02:/opt/hadoop/share/hadoop/hdfs/lib/
scp /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.15.jar data03:/opt/hadoop/share/hadoop/hdfs/lib/
scp /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.15.jar data01:/opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/
scp /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.15.jar data02:/opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/
scp /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.15.jar data03:/opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/
ssh data01 "rm -f /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar"
ssh data02 "rm -f /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar"
ssh data03 "rm -f /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar"
ssh data01 "rm -f /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.13.jar"
ssh data02 "rm -f /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.13.jar"
ssh data03 "rm -f /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.13.jar"
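Equivalently, the per-node copies above can be collapsed into a loop (a sketch assuming passwordless ssh to data01 through data03):

for node in data01 data02 data03; do
    scp /opt/hadoop/etc/hadoop/hadoop-env.sh ${node}:/opt/hadoop/etc/hadoop/
    scp /opt/hadoop/libexec/jsvc ${node}:/opt/hadoop/libexec/
    scp /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.15.jar ${node}:/opt/hadoop/share/hadoop/hdfs/lib/
    scp /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.15.jar ${node}:/opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/
    ssh ${node} "rm -f /opt/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar /opt/hadoop/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/commons-daemon-1.0.13.jar"
done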

Change ownership of the Hadoop directories to the hadoop user

Run the following on every node:

chown -R hadoop:hadoop /opt/hadoop
chown -R hadoop:hadoop /opt/hadoop/*
chown -R hadoop:hadoop /opt/dfs

Install JCE

COOKIES="oraclelicense=accept-securebackup-cookie;gpw_e24=http://edelivery.oracle.com"
JCE_DOWNLOAD_URL="http://download.oracle.com/otn-pub/java/jce/7/UnlimitedJCEPolicyJDK7.zip"
wget --no-check-certificate --header="Cookie: ${COOKIES}" -c "${JCE_DOWNLOAD_URL}" -O UnlimitedJCEPolicyJDK7.zip
unzip UnlimitedJCEPolicyJDK7.zip
cd UnlimitedJCEPolicyJDK7
mkdir /usr/java/jdk1.7.0_67/jre/lib/security/limit
mv /usr/java/jdk1.7.0_67/jre/lib/security/*.jar /usr/java/jdk1.7.0_67/jre/lib/security/limit/
cp *.jar /usr/java/jdk1.7.0_67/jre/lib/security/
scp *.jar data01:/usr/java/jdk1.7.0_67/jre/lib/security/
scp *.jar data02:/usr/java/jdk1.7.0_67/jre/lib/security/
scp *.jar data03:/usr/java/jdk1.7.0_67/jre/lib/security/
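To verify the unlimited-strength policy is active, query the maximum AES key length through the JDK's jrunscript (it should print 2147483647 rather than 128):

jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'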

Start HDFS

On the ctrl node, run as the hadoop user:

start-dfs.sh
stop-dfs.sh

On the ctrl node, run as root:

sh start-secure-dns.sh
sh stop-secure-dns.sh
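A simple end-to-end check that Kerberos is now enforced; the second listing should fail with a GSSException (No valid credentials provided) once the ticket is destroyed:

kinit -kt /opt/hadoop/etc/hadoop.keytab hadoop/ctrl
hdfs dfs -ls /
kdestroy
hdfs dfs -ls /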

Add authentication for Yarn

Perform the following on every node.

yarn-site.xml

  • yarn.resourcemanager.keytab:/opt/hadoop/etc/hadoop.keytab
  • yarn.resourcemanager.principal:hadoop/_HOST@CHEYO.NET
  • yarn.nodemanager.keytab:/opt/hadoop/etc/hadoop.keytab
  • yarn.nodemanager.principal:hadoop/_HOST@CHEYO.NET
  • yarn.nodemanager.container-executor.class:org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor
  • yarn.nodemanager.linux-container-executor.group:hadoop
<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.group</name>
  <value>hadoop</value>
</property>

mapred-site.xml

  • mapreduce.jobhistory.keytab:/opt/hadoop/etc/hadoop.keytab
  • mapreduce.jobhistory.principal:hadoop/_HOST@CHEYO.NET
<property>
  <name>mapreduce.jobhistory.keytab</name>
  <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
  <name>mapreduce.jobhistory.principal</name>
  <value>hadoop/_HOST@CHEYO.NET</value>
</property>

Build container-executor

  • Build container-executor
cd /opt/SrcHadoop/hadoop-2.7.1-src
mvn package -Pdist,native -DskipTests -Dtar -Dcontainer-executor.conf.dir=/etc
cp hadoop-yarn-project/target/hadoop-yarn-project-2.7.1/bin/container-executor /opt/hadoop/bin/container-executor
# Verify the build succeeded and that the binary embeds the expected conf dir (/etc should appear in the output)
strings /opt/hadoop/bin/container-executor | grep etc
# Set ownership and permissions: setuid root, executable by the hadoop group only
chown root:hadoop  /opt/hadoop/bin/container-executor
chmod 4750 /opt/hadoop/bin/container-executor
  • Create the container-executor configuration file

/etc/container-executor.cfg

# Group allowed to launch containers; must match yarn.nodemanager.linux-container-executor.group in yarn-site.xml
yarn.nodemanager.linux-container-executor.group=hadoop
# Comma-separated list of users banned from running applications; empty (commented out) means no one is banned
#banned.users=
# Minimum UID allowed to submit jobs; on CentOS, regular users have UIDs of 500 and above
min.user.id=500
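After the chown/chmod above, verify the permission bits: the binary must be setuid root and executable only by the hadoop group, so ls should show something like -rwsr-x--- root hadoop:

ls -l /opt/hadoop/bin/container-executor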

Start Yarn

On the ctrl node, run as the hadoop user:

start-yarn.sh
stop-yarn.sh

Test Yarn

hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 16 1000

Add authentication for Hive (verified with 1.2.1)

hive-site.xml

<property>
    <name>hive.server2.authentication</name>
    <value>KERBEROS</value>
</property>
<property>
    <name>hive.server2.authentication.kerberos.principal</name>
    <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>hive.server2.authentication.kerberos.keytab</name>
    <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
    <name>hive.metastore.sasl.enabled</name>
    <value>true</value>
</property>
<property>
    <name>hive.metastore.kerberos.principal</name>
    <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>hive.metastore.kerberos.keytab.file</name>
    <value>/opt/hadoop/etc/hadoop.keytab</value>
    <description>The path to the Kerberos Keytab file containing the
    metastore thrift server's service principal.</description>
</property>

core-site.xml

The proxyuser entries below allow the hadoop, hue, root, and hive users to impersonate end users when acting as service front ends (e.g. HiveServer2, Hue); a value of * places no restriction on the hosts impersonation may come from or on the groups that may be impersonated.

<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hive.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hive.groups</name>
    <value>*</value>
</property>

Start Hive

Start it the same way as before.

Client connection

First log in to Kerberos with kinit:

kinit -kt /opt/hadoop/etc/hadoop.keytab hadoop/ctrl@CHEYO.NET
klist

Then connect with beeline as follows:

beeline -u "jdbc:hive2://ctrl:10000/default;principal=hadoop/ctrl@CHEYO.NET"

Or:

beeline
> !connect jdbc:hive2://ctrl:10000/default;principal=hadoop/ctrl@CHEYO.NET
Leave the username and password blank and just press Enter.

Note: the principal given to kinit must match the principal in the beeline connection string, and the hostname of the machine the client actually runs on must match the host in that principal.

Add authentication for HBase

hbase-site.xml

<property>
    <name>hbase.master.kerberos.principal</name>
    <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>hbase.master.keytab.file</name>
    <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
    <name>hbase.regionserver.kerberos.principal</name>
    <value>hadoop/_HOST@CHEYO.NET</value>
</property>
<property>
    <name>hbase.regionserver.keytab.file</name>
    <value>/opt/hadoop/etc/hadoop.keytab</value>
</property>
<property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
</property>
<property>
    <name>hbase.rpc.engine</name>
    <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.apache.hadoop.hbase.security.token.TokenProvider,
    org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,
    org.apache.hadoop.hbase.security.access.AccessController</value>
</property>

hbase-env.sh

export HBASE_OPTS="$HBASE_OPTS -Djava.security.auth.login.config=/opt/hbase/conf/zk-jaas.conf"
export HBASE_MANAGES_ZK=false

zk-jaas.conf

Create a new file /opt/hbase/conf/zk-jaas.conf with the following content:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/opt/hadoop/etc/hadoop.keytab"
  principal="hadoop/ctrl@HADOOP.COM";
};

Complete the configuration above on every HBase node.

Note: in /opt/hbase/conf/zk-jaas.conf, the hostname after the slash in the principal is different on each node.

zoo.cfg (the Zookeeper configuration file)

Important:

skipACL=yes

Apply this on every Zookeeper node.

> Without this setting, an InvalidACL for /hbase error occurs and HMaster fails to start.
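Once every HBase node is reconfigured and restarted, a quick smoke test can be run from the ctrl node (assumes a valid ticket; status should print the cluster summary rather than an authentication error):

kinit -kt /opt/hadoop/etc/hadoop.keytab hadoop/ctrl
echo "status" | hbase shell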

Add authentication for Storm

Storm supervisors connect to Zookeeper, so Storm must be configured with the credentials it uses for the ZK connection.

Configure Storm to use Kerberos when connecting to Zookeeper

Add the following to the Storm configuration file (identical on every node):

supervisor.childopts: "-Djava.security.auth.login.config=/opt/hbase/conf/zk-jaas.conf"

Note: this reuses HBase's zk-jaas.conf file; you can also create a separate zk-jaas.conf for Storm, as sketched below.
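A minimal standalone Storm zk-jaas.conf would mirror the HBase one (the path and the per-node principal below are assumptions; adjust the hostname on each node):

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="/opt/hadoop/etc/hadoop.keytab"
  principal="hadoop/ctrl@CHEYO.NET";
};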

Configure Kerberos for client connections to Storm

To be added; see reference 5 [Storm Security].

References

  1. Practical Kerberos deployment for Hadoop
  2. Configuring Hadoop's Data Transfer Protocol to use SASL authentication
  3. An introduction to Hadoop's Kerberos security mechanism
  4. Configuring Kerberos authentication for Hive in detail
  5. Storm Security