If you haven't set up Docker yet, see this article for details: https://abytelalala.cn/index.php/2024/06/24/ubentu%e4%bb%8e%e9%9b%b6%e5%bc%80%e5%a7%8b%e9%83%a8%e7%bd%b2docker/
nano /etc/docker/daemon.json
Write the following three lines:
{
"registry-mirrors": ["https://docker.foreverlink.love/"]
}
service docker restart
docker info // verify that the mirror replacement took effect
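To narrow the output down, you can filter for the mirror entry (a quick check, assuming grep is available on the host):
docker info | grep -A 1 "Registry Mirrors"
# the output should list https://docker.foreverlink.love/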
docker pull ubuntu:latest
docker run -it ubuntu:latest
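Later steps refer to this container as hadoop01 (e.g. docker exec -it hadoop01 bash further down), so it is convenient to assign the name and hostname at creation time; a sketch using standard docker run flags:
docker run -it --name hadoop01 -h hadoop01 ubuntu:latest bash
# --name sets the container name, -h sets the hostname inside the container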
apt-get update
apt update
apt upgrade
net-tools
apt install net-tools
java
apt-get install openjdk-8-jdk
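A quick sanity check that the JDK landed in the standard Ubuntu location assumed by the environment variables later on:
java -version
# should report openjdk version "1.8.0_..."
ls /usr/lib/jvm/java-8-openjdk-amd64/bin/javac
# the JAVA_HOME path used below must exist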
Installation packages
Download these packages:
hadoop-3.2.2.tar.gz,
hbase-2.5.8-bin.tar.gz,
apache-zookeeper-3.7.2-bin.tar.gz
Drag them into /home/cust on the VM via the MobaXterm file panel.
Extract and move the archives
The next steps are done on the host: open a new terminal and keep the container terminal from before open.
su -
docker ps // find the container ID
docker cp /home/cust/hadoop-3.2.2.tar.gz <container-id>:/usr/local/ // copy into the container
docker cp /home/cust/hbase-2.5.8-bin.tar.gz <container-id>:/usr/local/
docker cp /home/cust/apache-zookeeper-3.7.2-bin.tar.gz <container-id>:/usr/local/ // same procedure for these two
Then go back to the container's terminal and run the following:
cd /usr/local
ls
// check that the archives arrived
tar -zxvf hadoop-3.2.2.tar.gz
mv hadoop-3.2.2 /usr/local/hadoop
tar -zxvf apache-zookeeper-3.7.2-bin.tar.gz
mv apache-zookeeper-3.7.2-bin /usr/local/zookeeper
tar -zxvf hbase-2.5.8-bin.tar.gz
mv hbase-2.5.8 /usr/local/hbase // the extracted directory name may differ; check with ls and adjust accordingly
Add environment variables
apt install nano
nano /etc/profile
Append the following to /etc/profile:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$PATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# when starting the daemons as root, also add the following
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin
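After saving, reload the profile in the current shell and check that the tools resolve (a minimal verification, assuming the mv targets above):
source /etc/profile
hadoop version
# should print Hadoop 3.2.2
which hbase
# should point into /usr/local/hbase/bin
which zkServer.sh
# should point into /usr/local/zookeeper/bin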
Passwordless SSH login
apt-get update
apt-get install systemd
apt-get update --fix-missing
apt-get install openssh-server
ssh-keygen -t rsa // press Enter at every prompt (passphrase, file location, etc.)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
service ssh start
service ssh status
// check that sshd is running
ssh localhost
// the first login will ask whether you want to continue connecting
// answer with the full word yes, not just y
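If ssh localhost still prompts for a password, the usual cause is file permissions on the key material; a common fix:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys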
Hadoop configuration files
All of the following files are under /usr/local/hadoop/etc/hadoop/.
hadoop-env.sh
Add:
# explicitly declare the Java path
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
<!-- where Hadoop stores files generated at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<!-- a tmp folder under the Hadoop install directory -->
<value>file:/usr/local/hadoop/tmp</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<!-- number of HDFS block replicas, including the original; the default is 3 -->
<!-- in pseudo-distributed (single-node) mode this value must be 1 -->
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<!-- directory where the NameNode stores the name table -->
<value>file:/usr/local/hadoop/tmp/hdfs/name</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop02:50090</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<!-- directory where the DataNode stores data blocks -->
<value>file:/usr/local/hadoop/tmp/hdfs/data</value>
</property>
<property>
<!-- dfs.permissions is the deprecated alias of dfs.permissions.enabled; keep a single setting to avoid conflicting true/false values -->
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<!-- run MapReduce on YARN -->
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop01</value>
</property>
<property>
<!-- how the NodeManager transfers data: the MapReduce shuffle service -->
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
workers
hadoop01
hadoop02
hadoop03
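The NameNode/DataNode directories referenced in hdfs-site.xml can be created up front so the paths exist when the daemons start (a minimal sketch; mkdir -p is harmless if they already exist):
mkdir -p /usr/local/hadoop/tmp/hdfs/name
mkdir -p /usr/local/hadoop/tmp/hdfs/data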
ZooKeeper configuration
Under /usr/local/zookeeper/conf/:
zoo.cfg (if the directory only contains zoo_sample.cfg, create it with cp zoo_sample.cfg zoo.cfg)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
clientPort=2181
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
myid
Create a myid file in the dataDir on every machine in the cluster; its content must match the x in the corresponding server.x line. Since we are editing the configuration inside a single image, set them all to 1 for now and change them per node later.
cd /usr/local/zookeeper
mkdir data
nano data/myid // the file must live in the dataDir, i.e. /usr/local/zookeeper/data
myid contains:
1
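Equivalently, the file can be written in one line (the path is the dataDir from zoo.cfg above):
echo 1 > /usr/local/zookeeper/data/myid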
HBase configuration
Under /usr/local/hbase/conf/:
hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop01:9000/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop01,hadoop02,hadoop03</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/usr/local/zookeeper/data</value>
</property>
</configuration>
hbase-env.sh
Add:
export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP=true
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_CLASSPATH=/usr/local/hbase/conf
export HBASE_MANAGES_ZK=false
backup-masters
hadoop01
regionservers
hadoop01
hadoop02
hadoop03
If you haven't set up HBase yet, see this article for details: https://abytelalala.cn/index.php/2024/06/26/%e5%9f%ba%e4%ba%8eubentudocker%e6%90%ad%e5%bb%bahadoop%ef%bc%8czookeeper%ef%bc%8chbase/
Installation packages
Download these packages:
scala-2.12.13.tgz
spark-3.0.0-bin-hadoop3.2.tgz
jackson-databind-2.10.1.jar
// download from Maven Central: https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.10.1/
Drag scala-2.12.13.tgz and spark-3.0.0-bin-hadoop3.2.tgz into /home/cust on the VM via MobaXterm.
In the hadoop01 container, run these two commands:
su -
mkdir /jara
For jackson-databind-2.10.1.jar, first create a jara folder yourself, then drag the jar into /home/cust/jara on the VM via MobaXterm (or fetch it from the command line, as shown below).
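Alternatively, the jar can be downloaded directly on the VM; the URL below follows Maven Central's standard repository layout:
wget -P /home/cust/jara https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-databind/2.10.1/jackson-databind-2.10.1.jar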
Copy the files into hadoop01 (back in the host terminal):
su -
docker ps // find the container ID
docker cp /home/cust/jara/jackson-databind-2.10.1.jar <container-id>:/jara/ // copy into the container
docker cp /home/cust/scala-2.12.13.tgz <container-id>:/usr/local/
docker cp /home/cust/spark-3.0.0-bin-hadoop3.2.tgz <container-id>:/usr/local/
Enter the hadoop01 terminal:
docker exec -it hadoop01 bash
Configure Spark
cd /usr/local
tar -zxvf spark-3.0.0-bin-hadoop3.2.tgz
mv spark-3.0.0-bin-hadoop3.2 /usr/local/spark
cd /usr/local/spark/conf
mv spark-defaults.conf.template spark-defaults.conf
mv slaves.template slaves
spark-defaults.conf
nano spark-defaults.conf
spark.driver.extraClassPath /jara/jackson-databind-2.10.1.jar
spark.executor.extraClassPath /jara/jackson-databind-2.10.1.jar
slaves
nano slaves
hadoop01
hadoop02
hadoop03
spark-env.sh
nano spark-env.sh
SPARK_MASTER_HOST=hadoop01
SPARK_MASTER_PORT=7077
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop/
export SPARK_CLASSPATH=/usr/local/hbase/lib/*
export SCALA_HOME=/usr/local/scala
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/
export SPARK_WORKER_MEMORY=8G
spark-config.sh
cd ../sbin/
nano spark-config.sh
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
Environment variable configuration
cd /usr/local/
tar -zxvf scala-2.12.13.tgz
mv scala-2.12.13 /usr/local/scala
nano /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=.:$SCALA_HOME/bin:$PATH
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
source /etc/profile
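To confirm the Scala and Spark setup, two quick version checks (both binaries come from the archives extracted above):
scala -version
# should report Scala 2.12.13
spark-submit --version
# should report Spark 3.0.0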
Commit the image
docker commit hadoop01 sparknew
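A sketch of how the committed image is typically used next: start three containers named hadoop01 to hadoop03 on a shared Docker network (the network name here is illustrative), then change /usr/local/zookeeper/data/myid to 2 and 3 inside the second and third containers, as noted in the myid section:
docker rm -f hadoop01
# remove the old container so the name can be reused
docker network create hadoop-net
docker run -itd --name hadoop01 -h hadoop01 --network hadoop-net sparknew bash
docker run -itd --name hadoop02 -h hadoop02 --network hadoop-net sparknew bash
docker run -itd --name hadoop03 -h hadoop03 --network hadoop-net sparknew bash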