From e5c72d82170d9639e716c4057e11706cd509e51c Mon Sep 17 00:00:00 2001 From: Jhon Saldarriaga Date: Thu, 2 Mar 2023 00:52:03 -0500 Subject: [PATCH 1/6] filesystem automation --- .gitignore | 14 ++++++ README.md | 79 ++++++++++++++++++++++++++++++---- Vagrantfile | 62 ++++++++++++++++++++++++++ ansible.cfg | 7 +++ ansible_hosts | 6 +++ init_distributed_filesystem.sh | 5 +++ playbooks/nodemaster-conf.yml | 21 +++++++++ playbooks/nodes-conf.yml | 19 ++++++++ scripts/configuration.sh | 3 ++ scripts/glusterfs.sh | 14 ++++++ 10 files changed, 221 insertions(+), 9 deletions(-) create mode 100644 .gitignore create mode 100644 Vagrantfile create mode 100644 ansible.cfg create mode 100644 ansible_hosts create mode 100644 init_distributed_filesystem.sh create mode 100644 playbooks/nodemaster-conf.yml create mode 100644 playbooks/nodes-conf.yml create mode 100644 scripts/configuration.sh create mode 100644 scripts/glusterfs.sh diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..055410e --- /dev/null +++ b/.gitignore @@ -0,0 +1,14 @@ +# Created by https://www.toptal.com/developers/gitignore/api/vagrant +# Edit at https://www.toptal.com/developers/gitignore?templates=vagrant + +### Vagrant ### +# General +.vagrant/ + +# Log files (if you are creating logs in debug mode, uncomment this) +# *.log + +### Vagrant Patch ### +*.box + +# End of https://www.toptal.com/developers/gitignore/api/vagrant diff --git a/README.md b/README.md index c7e8ba0..30cf497 100644 --- a/README.md +++ b/README.md @@ -1,11 +1,72 @@ -# sd-workshop2 2022-1 -sd workshop2 +Para ejecutar el escenario del filesystem mediante un gluster (1 nodo master y 2 nodos), ejecutar el script ./init_distributed_filesystem -- Completar la lógica de la applicación de pagos de tal manera que al hacer un pago através del microservicio de pagos, el monto de las facturas sea correctamente debitado, es decir, actualmente si una factura debe 1000 y yo hago un pago por 400 a esa factura, el microservicio 
invoice me sobreescribe el 1000 por 400 en vez de mostrarme el saldo restante 1000-400=600. -- Completar la lógica de la aplicación de tal manera que haya 3 estados para las facturas. 0=debe 1=pagadoparcialmente 2=pagado -- Hacer que las applicaciones se puedan registrar con consul -- Debe ser un pull request a este repositorio sd-workshop2 +# Distributed File System (With Glusterfs) + +![alt text](https://docs.gluster.org/en/v3/images/640px-GlusterFS_Architecture.png "gluster") +> Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace. + +## Volumes + +- Distributed Glusterfs Volume +![img](https://cloud.githubusercontent.com/assets/10970993/7412364/ac0a300c-ef5f-11e4-8599-e7d06de1165c.png) + +- Replicated Glusterfs Volume +![img2](https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png) + +- Striped Glusterfs Volume +![img3](https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png) + +### GlusterFS (Initialization) + +On the master node (the `/etc/hosts` entries created by scripts/configuration.sh are `master`, `node1` and `node2`, so use those names here) +``` +$ sudo gluster peer probe node1 +$ sudo gluster peer probe node2 +$ gluster pool list +$ sudo gluster volume create gv0 replica 3 master:/gluster/data/gv0 node1:/gluster/data/gv0 node2:/gluster/data/gv0 +$ sudo gluster volume set gv0 auth.allow 127.0.0.1 +$ sudo gluster volume start gv0 +``` + +On each node +``` +$ sudo mount.glusterfs localhost:/gv0 /mnt +``` + +To add a new server + +| Command | Description | +|---|---| +| gluster peer status | Check the cluster status | +| gluster peer probe node4 | Add the new node | +| gluster volume status | Note the volume name | +| gluster volume add-brick swarm-vols replica 5 node4:/gluster/data/swarm-vols | TODO: verify this command | + +To remove a node from the cluster, first remove its bricks from the associated volumes + +| Command | Description | +|---|---| +| 
gluster volume info | Check the identifiers of the current bricks | +| gluster volume remove-brick swarm-vols replica 1 node1:/gluster/data force | Remove a brick from a volume that has two replicas | +| gluster peer detach node1 | Remove the node from the cluster | + +To delete a volume + +| Command | Description | +|---|---| +| gluster volume stop swarm-vols | Stop the volume | +| gluster volume delete swarm-vols | Delete the volume | + + +### References +* https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/ +* https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/ +* https://support.rackspace.com/how-to/add-and-remove-glusterfs-servers/ +* http://embaby.com/blog/using-glusterfs-docker-swarm-cluster/ +* https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/ +* http://ask.xmodulo.com/create-mount-xfs-file-system-linux.html +* https://www.cyberciti.biz/faq/linux-how-to-delete-a-partition-with-fdisk-command/ +* https://support.rackspace.com/how-to/getting-started-with-glusterfs-considerations-and-installation/ +* https://everythingshouldbevirtual.com/virtualization/vagrant-adding-a-second-hard-drive/ +* https://www.jamescoyle.net/how-to/351-share-glusterfs-volume-to-a-single-ip-address -Bonus: -- Subir las imagenes de la app a Docker hub -- Crear un script en bash que lance toda la aplicación. diff --git a/Vagrantfile b/Vagrantfile new file mode 100644 index 0000000..deba536 --- /dev/null +++ b/Vagrantfile @@ -0,0 +1,62 @@ +# -*- mode: ruby -*- +# vi: set ft=ruby : + +# All Vagrant configuration is done below. The "2" in Vagrant.configure +# configures the configuration version (we support older styles for +# backwards compatibility). Please don't change it unless you know what +# you're doing. 
+ +firstDisk = './firstDisk.vdi' +secondDisk = './secondDisk.vdi' +thirdDisk = './thirdDisk.vdi' +fourthDisk = './fourthDisk.vdi' +Vagrant.configure("2") do |config| + + config.ssh.insert_key = false + config.vm.define "nodemaster" do |lb| + lb.vm.box = "centos/7" + lb.vm.hostname = "nodemaster" + lb.vm.network "private_network", ip: "192.168.69.10" + lb.vm.provider "virtualbox" do |vb| + vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "nodemaster"] + unless File.exist?(firstDisk) + vb.customize ['createhd', '--filename', firstDisk, '--variant', 'Fixed', '--size', 5 * 1024] + end + vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', firstDisk] + end + lb.vm.provision "shell", path: "scripts/glusterfs.sh" + lb.vm.provision "shell", path: "scripts/configuration.sh" + end + + config.vm.define "node1" do |node1| + node1.vm.box = "centos/7" + node1.vm.hostname = "node-1" + node1.vm.network "private_network", ip: "192.168.69.100" + node1.vm.provider "virtualbox" do |vb| + vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node-1"] + unless File.exist?(secondDisk) + vb.customize ['createhd', '--filename', secondDisk, '--variant', 'Fixed', '--size', 5 * 1024] + end + vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', secondDisk] + end + node1.vm.provision "shell", path: "scripts/glusterfs.sh" + node1.vm.provision "shell", path: "scripts/configuration.sh" + end + + config.vm.define "node2" do |node2| + node2.vm.box = "centos/7" + node2.vm.hostname = "node-2" + node2.vm.network "private_network", ip: "192.168.69.101" + node2.vm.provider "virtualbox" do |vb| + vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node-2"] + unless File.exist?(thirdDisk) + vb.customize ['createhd', '--filename', thirdDisk, '--variant', 'Fixed', '--size', 5 * 1024] + end + vb.customize 
['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', thirdDisk] + end + node2.vm.provision "shell", path: "scripts/glusterfs.sh" + node2.vm.provision "shell", path: "scripts/configuration.sh" + end + +end + diff --git a/ansible.cfg b/ansible.cfg new file mode 100644 index 0000000..0a86b33 --- /dev/null +++ b/ansible.cfg @@ -0,0 +1,7 @@ +[defaults] +inventory=./ansible_hosts +remote_user=vagrant +private_key_file=$HOME/.vagrant.d/insecure_private_key +host_key_checking=False +retry_files_enabled=False +#interpreter_python=auto_silent diff --git a/ansible_hosts b/ansible_hosts new file mode 100644 index 0000000..480d2ca --- /dev/null +++ b/ansible_hosts @@ -0,0 +1,6 @@ +[nodes] +node1 ansible_ssh_host=192.168.69.100 +node2 ansible_ssh_host=192.168.69.101 + +[masters] +master ansible_ssh_host=192.168.69.10 diff --git a/init_distributed_filesystem.sh b/init_distributed_filesystem.sh new file mode 100644 index 0000000..d290a6a --- /dev/null +++ b/init_distributed_filesystem.sh @@ -0,0 +1,5 @@ +#!/bin/bash + +vagrant up +ansible-playbooks playbooks/nodemaster-conf.yml +ansible-playbooks playbooks/nodes-conf.yml \ No newline at end of file diff --git a/playbooks/nodemaster-conf.yml b/playbooks/nodemaster-conf.yml new file mode 100644 index 0000000..a238b25 --- /dev/null +++ b/playbooks/nodemaster-conf.yml @@ -0,0 +1,21 @@ +--- +- hosts: master + become: true + tasks: + - name: Start gluster + systemd: + name: glusterd + state: started + + - name: Peer every node + gluster_peer: + state: present + nodes: + - node1 + - node2 + + - name: Create the volume + shell: sudo gluster volume create gv0 replica 3 master:/data node1:/data node2:/data force + + - name: Start volume + shell: sudo gluster volume start gv0 diff --git a/playbooks/nodes-conf.yml b/playbooks/nodes-conf.yml new file mode 100644 index 0000000..1aeed55 --- /dev/null +++ b/playbooks/nodes-conf.yml @@ -0,0 +1,19 @@ +--- +- hosts: nodes + become: true + tasks: + - 
name: Mount GlusterFS + mount: + name: /mnt + fstype: glusterfs + opts: _netdev,defaults + src: "localhost:/gv0" + state: mounted +- hosts: node1 + tasks: + - name: Create file + shell: echo "Hello world!" | sudo tee /mnt/helloworld-node1.txt +- hosts: node2 + tasks: + - name: Create file + shell: echo "Hello world!" | sudo tee /mnt/helloworld-node2.txt diff --git a/scripts/configuration.sh b/scripts/configuration.sh new file mode 100644 index 0000000..e2bb00d --- /dev/null +++ b/scripts/configuration.sh @@ -0,0 +1,3 @@ +echo "192.168.69.10 master" >> /etc/hosts +echo "192.168.69.100 node1" >> /etc/hosts +echo "192.168.69.101 node2" >> /etc/hosts diff --git a/scripts/glusterfs.sh b/scripts/glusterfs.sh new file mode 100644 index 0000000..0b5de03 --- /dev/null +++ b/scripts/glusterfs.sh @@ -0,0 +1,14 @@ +yum install -y centos-release-gluster +yum install -y glusterfs-server +# yum install -y xfsprogs +service glusterd start + +sfdisk /dev/sdb << EOF +; +EOF + +mkfs.xfs /dev/sdb1 +mkdir -p /gluster/data +# echo "/dev/sdb1 /gluster/data xfs defaults 1 2" >> /etc/fstab +mount /dev/sdb1 /gluster/data/ +#mount -a && mount From cc1e8d474245580deccbd4eafa3e01e935b87704 Mon Sep 17 00:00:00 2001 From: Jhon Saldarriaga <54784347+JhonSaldarriaga@users.noreply.github.com> Date: Thu, 2 Mar 2023 01:33:07 -0500 Subject: [PATCH 2/6] Update README.md --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 67aedbe..7a5cb77 100644 --- a/README.md +++ b/README.md @@ -4,8 +4,8 @@ Para ejecutar el escenario del filesystem mediante un gluster (1 nodo master y 2 Si aparece un error similar a: "The IP address configured for the host-only network is not within the allowed ranges. Please update the address used to be within the allowed ranges and run the command again." 
-Crea un nuevo archivo en /etc/vbox/networks.conf con el siguiente contenido: -\* 10.0.0.0/8 192.168.0.0/16 +Crea un nuevo archivo en /etc/vbox/networks.conf con el siguiente contenido:\n +\* 10.0.0.0/8 192.168.0.0/16\n \* 2001::/64 # Distributed File System (With Glusterfs) From f1436b61309a1db75f797edf8392cc3275202a1c Mon Sep 17 00:00:00 2001 From: Jhon Saldarriaga <54784347+JhonSaldarriaga@users.noreply.github.com> Date: Thu, 2 Mar 2023 01:34:03 -0500 Subject: [PATCH 3/6] Update README.md --- README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 7a5cb77..66cd5d2 100644 --- a/README.md +++ b/README.md @@ -4,10 +4,10 @@ Para ejecutar el escenario del filesystem mediante un gluster (1 nodo master y 2 Si aparece un error similar a: "The IP address configured for the host-only network is not within the allowed ranges. Please update the address used to be within the allowed ranges and run the command again." -Crea un nuevo archivo en /etc/vbox/networks.conf con el siguiente contenido:\n -\* 10.0.0.0/8 192.168.0.0/16\n -\* 2001::/64 - +Crea un nuevo archivo en /etc/vbox/networks.conf con el siguiente contenido: +"* 10.0.0.0/8 192.168.0.0/16" +"* 2001::/64" +(!Quitamos las "") # Distributed File System (With Glusterfs) ![alt text](https://docs.gluster.org/en/v3/images/640px-GlusterFS_Architecture.png "gluster") From 54247a93a4618bbf78e354dac8b7c7a0053ae11e Mon Sep 17 00:00:00 2001 From: Jhon Saldarriaga <54784347+JhonSaldarriaga@users.noreply.github.com> Date: Thu, 2 Mar 2023 01:35:59 -0500 Subject: [PATCH 4/6] Update README.md --- README.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 66cd5d2..58a9cf9 100644 --- a/README.md +++ b/README.md @@ -5,9 +5,11 @@ Si aparece un error similar a: "The IP address configured for the host-only network is not within the allowed ranges. 
Please update the address used to be within the allowed ranges and run the command again." Crea un nuevo archivo en /etc/vbox/networks.conf con el siguiente contenido: -"* 10.0.0.0/8 192.168.0.0/16" -"* 2001::/64" -(!Quitamos las "") + +\* 10.0.0.0/8 192.168.0.0/16 + +\* 2001::/64 + # Distributed File System (With Glusterfs) ![alt text](https://docs.gluster.org/en/v3/images/640px-GlusterFS_Architecture.png "gluster") From c82d555a43a11d1fd2788359e24f094baa89d8b5 Mon Sep 17 00:00:00 2001 From: Jhon Saldarriaga Date: Thu, 2 Mar 2023 06:31:50 -0500 Subject: [PATCH 5/6] edit README --- README.md | 10 +++++++++- init_distributed_filesystem.sh | 6 +++--- 2 files changed, 12 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 30cf497..67aedbe 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,12 @@ -Para ejecutar el escenario del filesystem mediante un gluster (1 nodo master y 2 nodos), ejecutar el script ./init_distributed_filesystem +Para ejecutar el escenario del filesystem mediante un gluster (1 nodo master y 2 nodos): +- Damos permisos de ejecución al script init_distributed_filesystem: chmod +x init_distributed_filesystem.sh +- Ejecutamos el script ./init_distributed_filesystem +Si aparece un error similar a: +"The IP address configured for the host-only network is not within the allowed ranges. +Please update the address used to be within the allowed ranges and run the command again." 
+Crea un nuevo archivo en /etc/vbox/networks.conf con el siguiente contenido: +\* 10.0.0.0/8 192.168.0.0/16 +\* 2001::/64 # Distributed File System (With Glusterfs) diff --git a/init_distributed_filesystem.sh b/init_distributed_filesystem.sh index d290a6a..5bc984f 100644 --- a/init_distributed_filesystem.sh +++ b/init_distributed_filesystem.sh @@ -1,5 +1,5 @@ #!/bin/bash -vagrant up -ansible-playbooks playbooks/nodemaster-conf.yml -ansible-playbooks playbooks/nodes-conf.yml \ No newline at end of file +#vagrant up +ansible-playbook playbooks/nodemaster-conf.yml +ansible-playbook playbooks/nodes-conf.yml From 960a664012493c9d1116c8feb512034d590f0e98 Mon Sep 17 00:00:00 2001 From: Jhon Saldarriaga Date: Thu, 2 Mar 2023 06:38:41 -0500 Subject: [PATCH 6/6] fix --- init_distributed_filesystem.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/init_distributed_filesystem.sh b/init_distributed_filesystem.sh index 5bc984f..fb4f94b 100644 --- a/init_distributed_filesystem.sh +++ b/init_distributed_filesystem.sh @@ -1,5 +1,5 @@ #!/bin/bash -#vagrant up +vagrant up ansible-playbook playbooks/nodemaster-conf.yml ansible-playbook playbooks/nodes-conf.yml
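One rough edge the series above leaves in place: scripts/configuration.sh appends the three `/etc/hosts` entries unconditionally, so every repeated `vagrant provision` duplicates them. A minimal idempotent sketch follows; it is a hypothetical replacement, not part of the patches, and `HOSTS_FILE` defaults to a scratch file so it can run unprivileged (on the VMs it would be `/etc/hosts`):

```shell
#!/bin/bash
# Idempotent variant of scripts/configuration.sh: append each
# host entry only if that exact line is not already present.
# HOSTS_FILE is an assumption for illustration; the real script
# writes straight to /etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

add_host() {
  local entry="$1 $2"
  # -x matches whole lines, -F disables regex interpretation
  grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
}

add_host "192.168.69.10" "master"
add_host "192.168.69.100" "node1"
add_host "192.168.69.101" "node2"
add_host "192.168.69.10" "master"   # duplicate call: nothing is appended
```

Re-running the script against the same file leaves the entry count unchanged, so reprovisioning stays safe.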