Follow these steps to install and run HaMu on your system.
First, clone the HaMu repository to your local machine:

```bash
git clone https://github.com/DOCUTEE/HaMu.git
cd HaMu
```

Building the Docker images is required only the first time, or after making changes in the HaMu directory (such as modifying the owner name). Make sure Docker is running before proceeding.
⏳ Note: The first build may take a few minutes, as no cached layers exist yet.
You need to add execute permission to the scripts in the `linux` folder:

```bash
chmod +x linux/*
```

If you don't have sufficient permissions, you may need to run the command with `sudo`.
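If a script still fails with "Permission denied", you can check the execute bit directly with `test -x`. Below is a minimal self-contained sketch (a temporary file stands in for the repo's scripts, so this runs anywhere):

```shell
# Sketch: check the execute bit the same way you would for the
# scripts under linux/ (a temporary file stands in for a real script).
script=$(mktemp)
chmod +x "$script"            # same operation as `chmod +x linux/*`
if [ -x "$script" ]; then
  result="executable"
else
  result="not executable"
fi
echo "$result"
rm -f "$script"
```

The same `[ -x path ]` check works on the real files, e.g. `[ -x linux/build-image.sh ]`.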
```bash
./linux/build-image.sh
```

By default, running the command below will launch a Hadoop cluster with 3 nodes (1 master and 2 slaves):
```bash
./linux/start-cluster.sh
```

If you want to customize the number of slave nodes, specify the total number of nodes (master + slaves) as an argument. For example, to start a cluster with 1 master and 5 slaves (6 nodes total):
```bash
./linux/start-cluster.sh 6
```

After Step 3, you will be inside the master container's CLI, where you can interact with the cluster.
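The relationship between the argument and the cluster layout is simply total = 1 master + N slaves. A quick sketch of that arithmetic (the `>= 2` sanity check is an assumption for illustration; the actual `start-cluster.sh` may validate its argument differently):

```shell
# Sketch: derive the slave count from the total-node argument.
total_nodes=6                    # argument passed to start-cluster.sh
if [ "$total_nodes" -ge 2 ]; then
  slaves=$((total_nodes - 1))    # one node is always the master
  echo "1 master + $slaves slaves"
fi
```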
1️⃣ Start the HDFS services:
```bash
start-dfs.sh
```

2️⃣ Check active DataNodes:
```bash
hdfs dfsadmin -report
```

If you see live DataNodes, your cluster is running successfully. 🚀
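The report can also be checked non-interactively, for example in a health-check script. Below is a sketch that extracts the live-DataNode count from the report text; the sample line stands in for real command output, and the exact `Live datanodes (N):` wording may vary across Hadoop versions:

```shell
# Sketch: extract the live-DataNode count from `hdfs dfsadmin -report`.
# A sample line stands in for the real output here, so this runs anywhere;
# inside the master container you would use: report=$(hdfs dfsadmin -report)
report="Live datanodes (2):"
live=$(printf '%s\n' "$report" | sed -n 's/.*Live datanodes (\([0-9]*\)).*/\1/p')
echo "live DataNodes: $live"
```

A wrapper script could compare `$live` against the expected slave count and fail fast if any DataNode is missing.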
