
Follow these steps to install and run HaMu on your system.

Step 1: Clone the Repository

First, download the HaMu repository to your local machine:

git clone https://github.com/DOCUTEE/HaMu.git
cd HaMu

Step 2: Build Docker Images (Optional)

Building the Docker images is required only the first time, or after making changes in the HaMu directory (such as modifying the owner name). Make sure Docker is running before proceeding.

⏳ Note:

  • The first build may take a few minutes as no cached layers exist.

  • You need to add execute permission to the scripts in the linux folder: chmod +x linux/*.

  • If your user lacks the required permissions, run the commands with sudo.

./linux/build-image.sh
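The permission fix from the notes above can be tried in isolation before touching the repository. This is a minimal sketch in a throwaway temp directory; the file name is a placeholder, not part of HaMu:

```shell
# Demonstrate the permission fix in an isolated temp directory
tmp=$(mktemp -d)
touch "$tmp/build-image.sh"            # placeholder script, no exec bit yet
chmod +x "$tmp"/*                      # same pattern as: chmod +x linux/*
[ -x "$tmp/build-image.sh" ] && result="executable"
echo "$result"
rm -rf "$tmp"
```

If chmod fails here with "Operation not permitted", that is the case where prefixing the real commands with sudo becomes necessary.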

Step 3: Enjoy your Hadoop Cluster

By default, running the command below will launch a Hadoop cluster with 3 nodes (1 master and 2 slaves):

./linux/start-cluster.sh

If you want to customize the number of slave nodes, specify the total number of nodes (master + slaves) as an argument. For example, to start a cluster with 1 master and 5 slaves (6 nodes total):

./linux/start-cluster.sh 6
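The master-plus-slaves arithmetic can be made explicit in a tiny wrapper. The slaves value is illustrative, and the final script call is left commented out since it requires the built images:

```shell
# The script takes the TOTAL node count, so add 1 for the master.
slaves=5
total=$((slaves + 1))
echo "$total"
# ./linux/start-cluster.sh "$total"
```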

Step 4: Verify the Installation

After Step 3, you will be inside the master container's CLI, where you can interact with the cluster.

1️⃣ Start the HDFS services:

start-dfs.sh

2️⃣ Check active DataNodes:

hdfs dfsadmin -report

📌 Expected Output: a report listing each live DataNode, including a summary line such as Live datanodes (2).

If you see live DataNodes, your cluster is running successfully. 🚀
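For scripted health checks, the live-DataNode count can be pulled out of the report with sed. The report text below is a hypothetical excerpt, not output captured from a real cluster:

```shell
# Hypothetical excerpt of `hdfs dfsadmin -report` output (sample data)
report='Configured Capacity: 100 (100 B)
Live datanodes (2):
Name: 172.18.0.3:9866'

# Extract the number inside the "Live datanodes (N):" summary line
live=$(printf '%s\n' "$report" | sed -n 's/^Live datanodes (\([0-9]*\)):.*/\1/p')
echo "$live"
```

On a real cluster, replace the sample string by piping hdfs dfsadmin -report straight into the same sed command.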