From 3fb0ba8fedb8867a8af27ee3b8c8d8248eaf1bec Mon Sep 17 00:00:00 2001 From: ValeArias07 Date: Wed, 1 Mar 2023 15:09:33 -0500 Subject: [PATCH 1/5] playbooks and config added --- 04_distributed_filesystem/.gitignore | 14 ++++ 04_distributed_filesystem/README.md | 70 +++++++++++++++++++ 04_distributed_filesystem/Vagrantfile | 62 ++++++++++++++++ 04_distributed_filesystem/ansible.cfg | 7 ++ 04_distributed_filesystem/ansible_hosts | 6 ++ .../playbooks/01-master-conf.yml | 14 ++++ .../playbooks/02-node-conf.yml | 22 ++++++ .../scripts/configuration.sh | 3 + .../scripts/glusterfs.sh | 14 ++++ 9 files changed, 212 insertions(+) create mode 100644 04_distributed_filesystem/.gitignore create mode 100644 04_distributed_filesystem/README.md create mode 100644 04_distributed_filesystem/Vagrantfile create mode 100644 04_distributed_filesystem/ansible.cfg create mode 100644 04_distributed_filesystem/ansible_hosts create mode 100644 04_distributed_filesystem/playbooks/01-master-conf.yml create mode 100644 04_distributed_filesystem/playbooks/02-node-conf.yml create mode 100644 04_distributed_filesystem/scripts/configuration.sh create mode 100644 04_distributed_filesystem/scripts/glusterfs.sh diff --git a/04_distributed_filesystem/.gitignore b/04_distributed_filesystem/.gitignore new file mode 100644 index 0000000..055410e --- /dev/null +++ b/04_distributed_filesystem/.gitignore @@ -0,0 +1,14 @@ +# Created by https://www.toptal.com/developers/gitignore/api/vagrant +# Edit at https://www.toptal.com/developers/gitignore?templates=vagrant + +### Vagrant ### +# General +.vagrant/ + +# Log files (if you are creating logs in debug mode, uncomment this) +# *.log + +### Vagrant Patch ### +*.box + +# End of https://www.toptal.com/developers/gitignore/api/vagrant diff --git a/04_distributed_filesystem/README.md b/04_distributed_filesystem/README.md new file mode 100644 index 0000000..652f5de --- /dev/null +++ b/04_distributed_filesystem/README.md @@ -0,0 +1,70 @@ +# Distributed File System 
(With Glusterfs)
+
+![alt text](https://docs.gluster.org/en/v3/images/640px-GlusterFS_Architecture.png "gluster")
+> Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.
+
+## Volumes
+
+- Distributed Glusterfs Volume
+![img](https://cloud.githubusercontent.com/assets/10970993/7412364/ac0a300c-ef5f-11e4-8599-e7d06de1165c.png)
+
+- Replicated Glusterfs Volume
+![img2](https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png)
+
+- Striped Glusterfs Volume
+![img3](https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png)
+
+### Glusterfs (Initialization)
+
+On the master node
+```
+$ sudo gluster peer probe node-1
+$ sudo gluster peer probe node-2
+$ gluster pool list
+$ sudo gluster volume create gv0 replica 3 master:/gluster/data/gv0 node-1:/gluster/data/gv0 node-2:/gluster/data/gv0
+$ sudo gluster volume set gv0 auth.allow 127.0.0.1
+$ sudo gluster volume start gv0
+```
+
+On each node
+```
+$ sudo mount.glusterfs localhost:/gv0 /mnt
+```
+
+To add a new server
+
+| Command | Description |
+|---|---|
+| gluster peer status | Check the cluster status |
+| gluster peer probe node4 | Add the new node |
+| gluster volume status | Note the volume name |
+| gluster volume add-brick swarm-vols replica 5 node4:/gluster/data/swarm-vols | TODO: verify this command |
+
+To remove a node from the cluster, first remove its bricks from the associated volumes
+
+| Command | Description |
+|---|---|
+| gluster volume info | Check the identifiers of the current bricks |
+| gluster volume remove-brick swarm-vols replica 1 node1:/gluster/data force | Remove a brick from a volume with two replicas |
+| gluster peer detach node1 | Remove a node from the cluster |
+
+Delete a volume
+
+| Command | Description |
+|---|---|
+| gluster volume stop swarm-vols | Stop the volume |
+| gluster volume delete swarm-vols | Delete the volume |
+
+
+### References
+* https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/
+* https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/
+* https://support.rackspace.com/how-to/add-and-remove-glusterfs-servers/
+* http://embaby.com/blog/using-glusterfs-docker-swarm-cluster/
+* https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
+* http://ask.xmodulo.com/create-mount-xfs-file-system-linux.html
+* https://www.cyberciti.biz/faq/linux-how-to-delete-a-partition-with-fdisk-command/
+* https://support.rackspace.com/how-to/getting-started-with-glusterfs-considerations-and-installation/
+* https://everythingshouldbevirtual.com/virtualization/vagrant-adding-a-second-hard-drive/
+* https://www.jamescoyle.net/how-to/351-share-glusterfs-volume-to-a-single-ip-address
+
diff --git a/04_distributed_filesystem/Vagrantfile b/04_distributed_filesystem/Vagrantfile
new file mode 100644
index 0000000..f0ebbd2
--- /dev/null
+++ b/04_distributed_filesystem/Vagrantfile
@@ -0,0 +1,62 @@
+# -*- mode: ruby -*-
+# vi: set ft=ruby :
+
+# All Vagrant configuration is done below. The "2" in Vagrant.configure
+# configures the configuration version (we support older styles for
+# backwards compatibility). Please don't change it unless you know what
+# you're doing.
+ +firstDisk = './firstDisk.vdi' +secondDisk = './secondDisk.vdi' +thirdDisk = './thirdDisk.vdi' +fourthDisk = './fourthDisk.vdi' +Vagrant.configure("2") do |config| + + config.ssh.insert_key = false + config.vm.define "node_master" do |lb| + lb.vm.box = "generic/centos9s" + lb.vm.hostname = "master" + lb.vm.network "private_network", ip: "192.168.56.200" + lb.vm.provider "virtualbox" do |vb| + vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node_master"] + unless File.exist?(firstDisk) + vb.customize ['createhd', '--filename', firstDisk, '--variant', 'Fixed', '--size', 5 * 1024] + end + vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', firstDisk] + end + lb.vm.provision "shell", path: "scripts/glusterfs.sh" + lb.vm.provision "shell", path: "scripts/configuration.sh" + end + + config.vm.define "node1" do |node1| + node1.vm.box = "generic/centos9s" + node1.vm.hostname = "node-1" + node1.vm.network "private_network", ip: "192.168.56.11" + node1.vm.provider "virtualbox" do |vb| + vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node-1"] + unless File.exist?(secondDisk) + vb.customize ['createhd', '--filename', secondDisk, '--variant', 'Fixed', '--size', 5 * 1024] + end + vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', secondDisk] + end + node1.vm.provision "shell", path: "scripts/glusterfs.sh" + node1.vm.provision "shell", path: "scripts/configuration.sh" + end + + config.vm.define "node2" do |node2| + node2.vm.box = "generic/centos9s" + node2.vm.hostname = "node-2" + node2.vm.network "private_network", ip: "192.168.56.12" + node2.vm.provider "virtualbox" do |vb| + vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node-2"] + unless File.exist?(thirdDisk) + vb.customize ['createhd', '--filename', thirdDisk, '--variant', 'Fixed', '--size', 5 * 1024] + end + 
vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', thirdDisk] + end + node2.vm.provision "shell", path: "scripts/glusterfs.sh" + node2.vm.provision "shell", path: "scripts/configuration.sh" + end + +end + diff --git a/04_distributed_filesystem/ansible.cfg b/04_distributed_filesystem/ansible.cfg new file mode 100644 index 0000000..0a86b33 --- /dev/null +++ b/04_distributed_filesystem/ansible.cfg @@ -0,0 +1,7 @@ +[defaults] +inventory=./ansible_hosts +remote_user=vagrant +private_key_file=$HOME/.vagrant.d/insecure_private_key +host_key_checking=False +retry_files_enabled=False +#interpreter_python=auto_silent diff --git a/04_distributed_filesystem/ansible_hosts b/04_distributed_filesystem/ansible_hosts new file mode 100644 index 0000000..439a3cd --- /dev/null +++ b/04_distributed_filesystem/ansible_hosts @@ -0,0 +1,6 @@ +[nodes] +node1 ansible_ssh_host=192.168.56.11 +node2 ansible_ssh_host=192.168.56.12 + +[masters] +master ansible_ssh_host=192.168.56.200 diff --git a/04_distributed_filesystem/playbooks/01-master-conf.yml b/04_distributed_filesystem/playbooks/01-master-conf.yml new file mode 100644 index 0000000..4f5b82e --- /dev/null +++ b/04_distributed_filesystem/playbooks/01-master-conf.yml @@ -0,0 +1,14 @@ +--- +- hosts: master + become: true + tasks: + - name: Iniciar el servicio + systemd: + name: glusterd + state: started + - name: Add node1 and node2 to the master + shell: sudo gluster peer probe node1 + shell: sudo gluster peer probe node2 + - name: Create the volume and start it + shell: sudo gluster volume create gv0 replica 3 master:/data node1:/data node2:/data + shell: sudo gluster volume start gv0 diff --git a/04_distributed_filesystem/playbooks/02-node-conf.yml b/04_distributed_filesystem/playbooks/02-node-conf.yml new file mode 100644 index 0000000..4cacf38 --- /dev/null +++ b/04_distributed_filesystem/playbooks/02-node-conf.yml @@ -0,0 +1,22 @@ +--- +- hosts: nodes + become: true + 
tasks:
+    - name: Add node1 and node2
+      shell: sudo gluster peer probe node1
+      shell: sudo gluster peer probe node2
+    - name: Montar volumen GlusterFS
+      mount:
+        name: /mnt
+        fstype: glusterfs
+        opts: _netdev,defaults
+        src: "localhost:/gv0"
+        state: mounted
+- hosts: node1
+  tasks:
+    - name: Create file
+      shell: echo "Hola desde node1" > /mnt/saludo-node1.txt
+- hosts: node2
+  tasks:
+    - name: Create file
+      shell: echo "Hola desde node2" > /mnt/saludo-node2.txt
diff --git a/04_distributed_filesystem/scripts/configuration.sh b/04_distributed_filesystem/scripts/configuration.sh
new file mode 100644
index 0000000..6d318fd
--- /dev/null
+++ b/04_distributed_filesystem/scripts/configuration.sh
@@ -0,0 +1,3 @@
+echo "192.168.56.200 master" >> /etc/hosts
+echo "192.168.56.11 node1" >> /etc/hosts
+echo "192.168.56.12 node2" >> /etc/hosts
diff --git a/04_distributed_filesystem/scripts/glusterfs.sh b/04_distributed_filesystem/scripts/glusterfs.sh
new file mode 100644
index 0000000..0b5de03
--- /dev/null
+++ b/04_distributed_filesystem/scripts/glusterfs.sh
@@ -0,0 +1,14 @@
+yum install -y centos-release-gluster
+yum install -y glusterfs-server
+# yum install -y xfsprogs
+service glusterd start
+
+sfdisk /dev/sdb << EOF
+;
+EOF
+
+mkfs.xfs /dev/sdb1
+mkdir -p /gluster/data
+# echo "/dev/sdb1 /gluster/data xfs defaults 1 2" >> /etc/fstab
+mount /dev/sdb1 /gluster/data/
+#mount -a && mount

From df79b664c1bd94a910999381d2872ee9f186c1c3 Mon Sep 17 00:00:00 2001
From: Valentina Arias Parra <54784992+ValeArias07@users.noreply.github.com>
Date: Wed, 1 Mar 2023 16:47:40 -0500
Subject: [PATCH 2/5] separate the shell tasks

---
 .../playbooks/01-master-conf.yml | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/04_distributed_filesystem/playbooks/01-master-conf.yml b/04_distributed_filesystem/playbooks/01-master-conf.yml
index 4f5b82e..1dcf53c 100644
--- a/04_distributed_filesystem/playbooks/01-master-conf.yml
+++ b/04_distributed_filesystem/playbooks/01-master-conf.yml
@@ -2,13 +2,20 @@
 - hosts: master
   become: true
   tasks:
-    - name: Iniciar el servicio
+    - name: Start the service
       systemd:
         name: glusterd
         state: started
-    - name: Add node1 and node2 to the master
+
+    - name: Add node1
       shell: sudo gluster peer probe node1
+
+    - name: Add node2
       shell: sudo gluster peer probe node2
-    - name: Create the volume and start it
-      shell: sudo gluster volume create gv0 replica 3 master:/data node1:/data node2:/data
+
+    - name: Create the volume
+      shell: sudo gluster volume create gv0 replica 3 master:/data node1:/data node2:/data force
+
+    - name: Start it
       shell: sudo gluster volume start gv0
+

From 14dfde6654c984de5d99b9a8f2ae0ce3ea80ebc2 Mon Sep 17 00:00:00 2001
From: Valentina Arias Parra <54784992+ValeArias07@users.noreply.github.com>
Date: Wed, 1 Mar 2023 16:48:26 -0500
Subject: [PATCH 3/5] add the tee command

---
 04_distributed_filesystem/playbooks/02-node-conf.yml | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/04_distributed_filesystem/playbooks/02-node-conf.yml b/04_distributed_filesystem/playbooks/02-node-conf.yml
index 4cacf38..bcbe667 100644
--- a/04_distributed_filesystem/playbooks/02-node-conf.yml
+++ b/04_distributed_filesystem/playbooks/02-node-conf.yml
@@ -2,9 +2,6 @@
 - hosts: nodes
   become: true
   tasks:
-    - name: Add node1 and node2
-      shell: sudo gluster peer probe node1
-      shell: sudo gluster peer probe node2
     - name: Montar volumen GlusterFS
       mount:
         name: /mnt
@@ -15,8 +12,8 @@
 - hosts: node1
   tasks:
     - name: Create file
-      shell: echo "Hola desde node1" > /mnt/saludo-node1.txt
+      shell: echo "Hola desde node1" | sudo tee /mnt/saludo-node1.txt
 - hosts: node2
   tasks:
     - name: Create file
-      shell: echo "Hola desde node2" > /mnt/saludo-node2.txt
+      shell: echo "Hola desde node2" | sudo tee /mnt/saludo-node2.txt

From 338ae5c5ebd0e6b21e3c7b2ac445092e848196bc Mon Sep 17 00:00:00 2001
From: Valentina Arias Parra <54784992+ValeArias07@users.noreply.github.com>
Date: Thu, 2 Mar 2023 00:35:20 -0500
Subject: [PATCH 4/5] Update README.md

---
 README.md | 33 +++++++++++++++++++++++++--------
 1 file changed, 25 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index c7e8ba0..13364e7 100644
--- a/README.md
+++ b/README.md
@@ -1,11 +1,28 @@
 # sd-workshop2
 2022-1 sd workshop2
-- Completar la lógica de la applicación de pagos de tal manera que al hacer un pago através del microservicio de pagos, el monto de las facturas sea correctamente debitado, es decir, actualmente si una factura debe 1000 y yo hago un pago por 400 a esa factura, el microservicio invoice me sobreescribe el 1000 por 400 en vez de mostrarme el saldo restante 1000-400=600.
-- Completar la lógica de la aplicación de tal manera que haya 3 estados para las facturas. 0=debe 1=pagadoparcialmente 2=pagado
-- Hacer que las applicaciones se puedan registrar con consul
-- Debe ser un pull request a este repositorio sd-workshop2
-
-Bonus:
-- Subir las imagenes de la app a Docker hub
-- Crear un script en bash que lance toda la aplicación.
+In this workshop, the configuration of a Gluster master server and two worker nodes is automated. Ansible is used to configure the virtual machines.
+
+## 01-master-conf.yml
+First, the glusterd service is started. Then the master server configuration is applied: the playbook connects to nodes 1 and 2 with shell tasks that run "gluster peer probe".
+
+Once the connection with the nodes is established, a volume named "gv0" is created and replicated across the 3 nodes (master, node1 and node2). In other words, the data folder holds the shared contents of the distributed storage.
+
+Finally, the volume is started.
+
+## 02-node-conf.yml
+To configure the nodes, the gv0 volume created on the master is mounted at /mnt. Then a text file is created on each node.
+
+To check that Gluster is working, simply open the data folder, where both files can be found. :)
+
+## How to test
+To test the Ansible configuration, run:
+
+$ vagrant up
+$ ansible-playbook ./playbooks/01-master-conf.yml
+$ ansible-playbook ./playbooks/02-node-conf.yml
+$ vagrant ssh node1
+$ cd /data/ && ls
+
+This shows that both files were created and replicated in the /data directory.
+

From c1328bf2969a7a52e8b306b1b22209cbba526ea1 Mon Sep 17 00:00:00 2001
From: Valentina Arias Parra <54784992+ValeArias07@users.noreply.github.com>
Date: Thu, 2 Mar 2023 00:36:22 -0500
Subject: [PATCH 5/5] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 13364e7..c0a8951 100644
--- a/README.md
+++ b/README.md
@@ -18,11 +18,13 @@ To check that Gluster is working, simply open the data folder, where both files
 
 ## How to test
 To test the Ansible configuration, run:
+````
 $ vagrant up
 $ ansible-playbook ./playbooks/01-master-conf.yml
 $ ansible-playbook ./playbooks/02-node-conf.yml
 $ vagrant ssh node1
 $ cd /data/ && ls
+````
 
 This shows that both files were created and replicated in the /data directory.
 
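
---

An editorial note on the playbooks in this series: the raw `shell: sudo gluster peer probe …` and `gluster volume create …` tasks are not idempotent, so re-running the playbooks reports changes, or fails, even when the cluster is already configured. Below is a minimal sketch of an idempotent variant using only core Ansible features (`register`, `changed_when`, `failed_when`). The exact `gluster` output strings ("already in peer list", "already exists") are assumptions about the CLI and should be checked against the installed version; the hostnames and brick paths match the playbooks above.

```yaml
# Hypothetical rework of the probe/create tasks from 01-master-conf.yml.
- hosts: master
  become: true
  tasks:
    - name: Add node1 to the trusted pool (safe to re-run)
      # sudo is not needed here; become: true already applies.
      shell: gluster peer probe node1
      register: probe_node1
      # Assumed CLI behaviour: probing an existing peer exits 0 and prints
      # a message containing "already in peer list" -> report "no change".
      changed_when: "'already in peer list' not in probe_node1.stdout"

    - name: Create gv0 only if it does not exist yet
      shell: gluster volume create gv0 replica 3 master:/data node1:/data node2:/data force
      register: create_gv0
      # Assumed CLI behaviour: creating an existing volume fails with a
      # message containing "already exists"; treat that case as success.
      failed_when: create_gv0.rc != 0 and 'already exists' not in create_gv0.stderr
      changed_when: create_gv0.rc == 0
```

The same pattern can be applied to `gluster volume start gv0`, which refuses to start a volume that is already started.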