Updated on 2024-11-10

Detailed process of installing standalone HBase on Linux


Prerequisite environment for installing HBase

1. JDK environment
2. Hadoop environment
3. ZooKeeper environment (optional; HBase ships with a built-in ZooKeeper)

Reference for each environment: check the supported Java versions and the supported Hadoop versions for each HBase release (see the compatibility tables in the HBase documentation).


Version information for this installation:
1. Java 1.8
   Download address: /java/technologies/downloads/
2. Hadoop 3.3.6
   Download address: /dist/hadoop/common/
   Domestic mirror: /apache/hadoop/common
3. ZooKeeper 3.7.x
   Download address: /zookeeper/
4. HBase 2.5.8
   Official website: /
   Download address: /dist/hbase/

All installation packages are uploaded to the /opt/firma/ directory, and the extracted files are placed under /opt/app/.
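To match that layout, the two directories can be created up front (a small sketch; the paths are simply the ones used in this article):

# Create the upload and install directories used throughout this article
mkdir -p /opt/firma /opt/app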

Installation of the Java environment

1. Check whether the currently installed JDK version is compatible with the HBase version to be installed.

# Check if java is installed
[root@localhost /]# java -version
java version "21.0.3" 2024-04-16 LTS

2. Install Java 1.8

# Unzip the uploaded JDK package to the specified directory (the archive name here assumes the 8u411 Linux x64 tarball)
tar -zxvf jdk-8u411-linux-x64.tar.gz -C /opt/app/
# Configure environment variables
vim /etc/profile

3. Add Java environment variables

export JAVA_HOME=/opt/app/jdk1.8.0_411
export PATH=$JAVA_HOME/bin:$PATH

4. Reload the configuration by typing: source /etc/profile

source /etc/profile
# View java version for testing
java -version

Installing Hadoop

1. Configure password-free login

1.1 Set up password-free login

ssh-keygen -t rsa -P ''

1.2 A key pair without a passphrase is generated; when asked for the path to save it, just press Enter.

1.3 The key pair id_rsa and id_rsa.pub is generated and stored in the ~/.ssh directory by default. Next, append id_rsa.pub to the authorized keys file.

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Modify permissions
chmod 600 ~/.ssh/authorized_keys
# Enable public key authentication in the SSH server configuration
vim /etc/ssh/sshd_config

1.4 Modify the SSH configuration (if you are prompted about insufficient privileges, prefix the command with sudo):

# Enable public key authentication
PubkeyAuthentication yes
# Path to the public key file
AuthorizedKeysFile %h/.ssh/authorized_keys

1.5 Restarting SSH

service ssh restart

1.6 If this step reports an error like "Failed to restart ssh.service: Unit not found.", use the following command to restart instead:

systemctl restart sshd
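1.7 To confirm that password-free login works, you can ssh to localhost; it should log in without asking for a password (a quick optional check):

# Should log in without prompting for a password
ssh localhost
exit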

2. Configure environment variables

# Unzip the uploaded Hadoop package to the specified directory
tar -zxvf hadoop-3.3.6.tar.gz -C /opt/app/
# Configure environment variables
vim /etc/profile

2.1 Add Hadoop environment variable configuration

export HADOOP_HOME=/opt/app/hadoop-3.3.6
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

2.2 Enable the environment variables: source /etc/profile

source /etc/profile

2.3 Execute hadoop version to verify that the configuration is correct

[root@localhost firma]# hadoop version
Hadoop 3.3.6
Source code repository /apache/ -r 1be78238728da9266a4f88195058f08fd012bf9c
Compiled by ubuntu on 2023-06-18T08:22Z
Compiled on platform linux-x86_64
Compiled with protoc 3.7.1
From source with checksum 5652179ad55f76cb287d9c633bb53bbd
This command was run using /opt/app/hadoop-3.3.6/share/hadoop/common/hadoop-common-3.3.6.jar

3. Configure the Hadoop-related files
3.1 In total you need to modify three files, hadoop-env.sh, core-site.xml and hdfs-site.xml, all located under $HADOOP_HOME/etc/hadoop/.


cd /opt/app/hadoop-3.3.6/etc/hadoop/
# Modify hadoop-env.sh
vim hadoop-env.sh
# Add the JDK configuration to the file, pointing at the directory where the JDK is installed
export JAVA_HOME=/opt/app/jdk1.8.0_411

cd /opt/app/hadoop-3.3.6/etc/hadoop/
# Modify core-site.xml
vim core-site.xml
# Add the following configuration
<configuration>
    <property>
        <!-- Communication address of the NameNode's HDFS protocol file system -->
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:8020</value>
    </property>
    <property>
        <!-- Hadoop data file storage directory -->
        <name>hadoop.tmp.dir</name>
        <value>/opt/app/hadoop/data</value>
    </property>
</configuration>

# Modify hdfs-site.xml
vim hdfs-site.xml
# Add the following configuration
<configuration>
    <property>
        <!-- Since this is a standalone deployment, set the HDFS replication factor to 1 -->
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
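Since core-site.xml points the data directory at /opt/app/hadoop/data, you can create it up front (an optional sketch; the directory is also created when the NameNode is formatted):

# Create the Hadoop data directory referenced in core-site.xml
mkdir -p /opt/app/hadoop/data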

4. Turn off the firewall; leaving the firewall on may make Hadoop's web UI unreachable.

# View firewall status
sudo firewall-cmd --state
# Disable the firewall.
sudo systemctl stop firewalld
# Disable boot-up
sudo systemctl disable firewalld
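If you would rather keep firewalld running, an alternative (a sketch, assuming firewalld is in use) is to open only the web UI ports used later in this article:

# Open the Hadoop NameNode UI (9870) and HBase Master UI (16010) ports
sudo firewall-cmd --permanent --add-port=9870/tcp
sudo firewall-cmd --permanent --add-port=16010/tcp
sudo firewall-cmd --reload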

5. The first time you start Hadoop you need to initialize it: enter the /opt/app/hadoop-3.3.6/bin directory and execute the following command

cd /opt/app/hadoop-3.3.6/bin
# Perform initialization
./hdfs namenode -format

Hadoop 3 does not allow the root user to start the cluster with the one-click scripts by default, so you need to configure the startup users.

cd /opt/app/hadoop-3.3.6/sbin
# Edit start-dfs.sh (and stop-dfs.sh) and add the following at the top
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

6. Start HDFS: enter the /opt/app/hadoop-3.3.6/sbin directory and start HDFS.

cd /opt/app/hadoop-3.3.6/sbin
# Start HDFS
./start-dfs.sh

7. Verify startup

Way 1: Execute jps to see if the NameNode and DataNode services have started:

[root@localhost hadoop]# jps
6050 Jps
23909 NameNode
24074 DataNode
24364 SecondaryNameNode

Way 2: Visit http://localhost:9870/
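Way 3 (optional): run a couple of HDFS commands as a smoke test; the directory name here is only an example:

# Create a test directory and list the HDFS root
hdfs dfs -mkdir -p /tmp/hdfs-test
hdfs dfs -ls /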

Installing HBase

HBase depends on Hadoop's HDFS, ZooKeeper and the Java environment.

1. Download, unzip and set environment variables

# Unzip the uploaded HBase package to the specified directory (the archive name assumes the 2.5.8 binary tarball)
tar -zxvf hbase-2.5.8-bin.tar.gz -C /opt/app/
# Configure environment variables
vim /etc/profile
# Add environment variables
export HBASE_HOME=/opt/app/hbase-2.5.8
export PATH=$PATH:${HBASE_HOME}/bin
# Reload environment variables
source /etc/profile
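As a quick check that the environment variables took effect (the exact output will differ), you can print the HBase version:

# Confirm that the hbase command is on the PATH
hbase version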

2. Modify the HBase configuration files

# Modify hbase-env.sh
vim /opt/app/hbase-2.5.8/conf/hbase-env.sh
# Add the following configuration
# Requires JDK 1.8+
export JAVA_HOME=/opt/app/jdk1.8.0_411
# Configure whether HBase uses the built-in ZooKeeper
export HBASE_MANAGES_ZK=true

# Modify hbase-site.xml
vim /opt/app/hbase-2.5.8/conf/hbase-site.xml
<configuration>
	<!-- false is standalone mode, true is distributed mode. -->
	<!-- Distributed means HBase and ZooKeeper run in different JVMs, i.e. HBase uses an external ZooKeeper. -->
	<property>
		<name>hbase.cluster.distributed</name>
		<value>false</value>
	</property>
	<!-- Where HBase stores its data; HBase data is usually kept on HDFS, and here it can be a standalone HDFS. -->
	<property>
		<name>hbase.rootdir</name>
		<!-- This HDFS address must match the HDFS address configured in Hadoop's core-site.xml -->
		<value>hdfs://localhost:8020/hbase</value>
	</property>
	<!-- Starting without this will result in an error -->
	<property>
		<name>hbase.unsafe.stream.capability.enforce</name>
		<value>false</value>
	</property>
	<!-- The ZooKeeper host address and port use the defaults, so no configuration is needed. -->
	<!-- By default they are taken from the regionservers file; the default is localhost:2181. -->
</configuration>
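If you do want to use an external ZooKeeper instead of the built-in one (optional; a sketch assuming a ZooKeeper instance on localhost with the default client port 2181), set export HBASE_MANAGES_ZK=false in hbase-env.sh and add the quorum address to hbase-site.xml:

	<property>
		<!-- Hosts of the external ZooKeeper ensemble (example value) -->
		<name>hbase.zookeeper.quorum</name>
		<value>localhost</value>
	</property>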

3. Start HBase

cd /opt/app/hbase-2.5.8/bin
# Launch HBase
./start-hbase.sh
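If the startup fails, the HBase logs usually explain why (a hint rather than an original step; the actual log file name includes the user and hostname):

# Tail the master log; the file name pattern is hbase-<user>-master-<hostname>.log
tail -n 100 /opt/app/hbase-2.5.8/logs/hbase-*-master-*.log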

4. Test for success
Method 1: Open http://localhost:16010/ in your browser; if the HBase web UI page is displayed, the startup was successful.


Method 2

# Run the jps command; the presence of HMaster indicates a successful startup
[root@localhost bin]# jps
23909 NameNode
24074 DataNode
15722 Jps
15147 HQuorumPeer
15275 HMaster
24364 SecondaryNameNode
# In pure stand-alone mode there is no separate HRegionServer process
15486 HRegionServer
# You can also open the hbase shell and run the list command to check for errors
[root@localhost logs]# hbase shell
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: /2.0/#shell
Version 2.5.8, r37444de6531b1bdabf2e445c83d0268ab1a6f919, Thu Feb 29 15:37:32 PST 2024
Took 0.0010 seconds
hbase:001:0> list
TABLE
0 row(s)
Took 0.2467 seconds
=> []
hbase:002:0> exit
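As a further smoke test you can create a table, insert a row and scan it from the hbase shell (the table name, column family and row key below are only examples):

create 'test_table', 'cf'
put 'test_table', 'row1', 'cf:name', 'hello'
scan 'test_table'
disable 'test_table'
drop 'test_table'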

This concludes this article on installing standalone HBase on Linux. For more on installing standalone HBase on Linux, please search my earlier articles or continue to browse the related articles below. I hope you will continue to support me!