14 changes: 14 additions & 0 deletions .gitignore
@@ -0,0 +1,14 @@
# Created by https://www.toptal.com/developers/gitignore/api/vagrant
# Edit at https://www.toptal.com/developers/gitignore?templates=vagrant

### Vagrant ###
# General
.vagrant/

# Log files (if you are creating logs in debug mode, uncomment this)
# *.log

### Vagrant Patch ###
*.box

# End of https://www.toptal.com/developers/gitignore/api/vagrant
89 changes: 80 additions & 9 deletions README.md
@@ -1,11 +1,82 @@
# sd-workshop2 2022-1
sd workshop2

Workshop tasks:
- Complete the payments application logic so that making a payment through the payments microservice correctly debits the invoice amount. Currently, if an invoice owes 1000 and a payment of 400 is made against it, the invoice microservice overwrites the 1000 with 400 instead of keeping the remaining balance 1000-400=600.
- Complete the application logic so that invoices have 3 states: 0=owed, 1=partially paid, 2=paid.
- Make the applications register themselves with Consul.
- Submit the work as a pull request to this sd-workshop2 repository.

To run the filesystem scenario on a Gluster cluster (1 master node and 2 nodes):
- Give the script execute permissions: chmod +x init_distributed_filesystem.sh
- Run the script: ./init_distributed_filesystem.sh

If an error similar to the following appears:

"The IP address configured for the host-only network is not within the allowed ranges.
Please update the address used to be within the allowed ranges and run the command again."

create a new file at /etc/vbox/networks.conf with the following content:

\* 10.0.0.0/8 192.168.0.0/16

\* 2001::/64
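One minimal way to create that file on the host (VirtualBox 6.1.28 and newer consults /etc/vbox/networks.conf for the allowed host-only ranges):

```
$ sudo mkdir -p /etc/vbox
$ sudo tee /etc/vbox/networks.conf >/dev/null <<'EOF'
* 10.0.0.0/8 192.168.0.0/16
* 2001::/64
EOF
```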

# Distributed File System (With Glusterfs)

![alt text](https://docs.gluster.org/en/v3/images/640px-GlusterFS_Architecture.png "gluster")
> Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.

## Volumes

- Distributed Glusterfs Volume
![img](https://cloud.githubusercontent.com/assets/10970993/7412364/ac0a300c-ef5f-11e4-8599-e7d06de1165c.png)

- Replicated Glusterfs Volume
![img2](https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png)

- Striped Glusterfs Volume
![img3](https://cloud.githubusercontent.com/assets/10970993/7412379/d75272a6-ef5f-11e4-869a-c355e8505747.png)
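
As a rough sketch, the type is chosen at volume-creation time (volume names and brick paths below are illustrative; striped volumes are deprecated in recent GlusterFS releases):

```
# Distributed (the default): each file lands on exactly one brick
$ sudo gluster volume create dist-vol master:/gluster/data/dist node1:/gluster/data/dist
# Replicated: every file is copied to all bricks in the replica set
# (replica 2 prompts a split-brain warning before proceeding)
$ sudo gluster volume create repl-vol replica 2 master:/gluster/data/repl node1:/gluster/data/repl
# Striped: each file is split into chunks spread across the bricks
$ sudo gluster volume create strip-vol stripe 2 master:/gluster/data/strip node1:/gluster/data/strip
```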

### GlusterFS (Initialization)

On the master node (node1/node2 are the names mapped in /etc/hosts by scripts/configuration.sh)
```
$ sudo gluster peer probe node1
$ sudo gluster peer probe node2
$ sudo gluster pool list
$ sudo gluster volume create gv0 replica 3 master:/gluster/data/gv0 node1:/gluster/data/gv0 node2:/gluster/data/gv0
$ sudo gluster volume set gv0 auth.allow 127.0.0.1
$ sudo gluster volume start gv0
```

On each node
```
$ sudo mount.glusterfs localhost:/gv0 /mnt
```
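
A quick way to verify the mount and replication (the file name here is just an example):

```
$ echo "hello gluster" | sudo tee /mnt/check.txt    # run on one node
$ cat /mnt/check.txt                                # run on another node
```

Note that the mount above does not survive a reboot; adding a line like `localhost:/gv0 /mnt glusterfs defaults,_netdev 0 0` to /etc/fstab (which is what the Ansible playbook's `state: mounted` does) makes it persistent.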

To add a new server

| Command | Description |
|---|---|
| gluster peer status | Check the cluster status |
| gluster peer probe node4 | Add the new node to the pool |
| gluster volume status | Note the volume name |
| gluster volume add-brick swarm-vols replica 5 node4:/gluster/data/swarm-vols | TODO: verify this command |
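
For reference, a sketch of what that add-brick step looks like end to end, assuming swarm-vols is currently a 4-way replica (so adding one brick raises it to replica 5):

```
$ sudo gluster peer probe node4
$ sudo gluster volume add-brick swarm-vols replica 5 node4:/gluster/data/swarm-vols
$ sudo gluster volume info swarm-vols    # should now report "1 x 5 = 5" bricks
```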

To remove a node from the cluster, first remove its bricks from the associated volumes

| Command | Description |
|---|---|
| gluster volume info | Look up the identifiers of the current bricks |
| gluster volume remove-brick swarm-vols replica 1 node1:/gluster/data force | Remove a brick from a two-replica volume |
| gluster peer detach node1 | Remove the node from the cluster |
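
The forced removal above fits a replicated volume; on a distributed volume the brick's data has to be migrated off first (same illustrative names):

```
$ sudo gluster volume remove-brick swarm-vols node1:/gluster/data start
$ sudo gluster volume remove-brick swarm-vols node1:/gluster/data status   # wait for "completed"
$ sudo gluster volume remove-brick swarm-vols node1:/gluster/data commit
```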

Delete a volume

| Command | Description |
|---|---|
| gluster volume stop swarm-vols | Stop the volume |
| gluster volume delete swarm-vols | Delete the volume |
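
Both commands prompt for confirmation. Deleting a volume does not erase the data already written to its bricks; to reclaim the space, remove the brick directories on each node afterwards (path as used above):

```
$ sudo rm -rf /gluster/data/swarm-vols   # run on every node that held a brick
```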


### References
* https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/
* https://support.rackspace.com/how-to/recover-from-a-failed-server-in-a-glusterfs-array/
* https://support.rackspace.com/how-to/add-and-remove-glusterfs-servers/
* http://embaby.com/blog/using-glusterfs-docker-swarm-cluster/
* https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
* http://ask.xmodulo.com/create-mount-xfs-file-system-linux.html
* https://www.cyberciti.biz/faq/linux-how-to-delete-a-partition-with-fdisk-command/
* https://support.rackspace.com/how-to/getting-started-with-glusterfs-considerations-and-installation/
* https://everythingshouldbevirtual.com/virtualization/vagrant-adding-a-second-hard-drive/
* https://www.jamescoyle.net/how-to/351-share-glusterfs-volume-to-a-single-ip-address

Bonus:
- Push the app images to Docker Hub
- Create a bash script that launches the whole application.
62 changes: 62 additions & 0 deletions Vagrantfile
@@ -0,0 +1,62 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

firstDisk = './firstDisk.vdi'
secondDisk = './secondDisk.vdi'
thirdDisk = './thirdDisk.vdi'
fourthDisk = './fourthDisk.vdi'
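# fourthDisk is not attached to any VM below; presumably reserved for the
# optional fourth node (node4) described in the README.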
Vagrant.configure("2") do |config|

  config.ssh.insert_key = false
  config.vm.define "nodemaster" do |lb|
    lb.vm.box = "centos/7"
    lb.vm.hostname = "nodemaster"
    lb.vm.network "private_network", ip: "192.168.69.10"
    lb.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "nodemaster"]
      unless File.exist?(firstDisk)
        vb.customize ['createhd', '--filename', firstDisk, '--variant', 'Fixed', '--size', 5 * 1024]
      end
      vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', firstDisk]
    end
    lb.vm.provision "shell", path: "scripts/glusterfs.sh"
    lb.vm.provision "shell", path: "scripts/configuration.sh"
  end

  config.vm.define "node1" do |node1|
    node1.vm.box = "centos/7"
    node1.vm.hostname = "node-1"
    node1.vm.network "private_network", ip: "192.168.69.100"
    node1.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node-1"]
      unless File.exist?(secondDisk)
        vb.customize ['createhd', '--filename', secondDisk, '--variant', 'Fixed', '--size', 5 * 1024]
      end
      vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', secondDisk]
    end
    node1.vm.provision "shell", path: "scripts/glusterfs.sh"
    node1.vm.provision "shell", path: "scripts/configuration.sh"
  end

  config.vm.define "node2" do |node2|
    node2.vm.box = "centos/7"
    node2.vm.hostname = "node-2"
    node2.vm.network "private_network", ip: "192.168.69.101"
    node2.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--memory", "512", "--cpus", "1", "--name", "node-2"]
      unless File.exist?(thirdDisk)
        vb.customize ['createhd', '--filename', thirdDisk, '--variant', 'Fixed', '--size', 5 * 1024]
      end
      vb.customize ['storageattach', :id, '--storagectl', 'IDE', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', thirdDisk]
    end
    node2.vm.provision "shell", path: "scripts/glusterfs.sh"
    node2.vm.provision "shell", path: "scripts/configuration.sh"
  end

end

7 changes: 7 additions & 0 deletions ansible.cfg
@@ -0,0 +1,7 @@
[defaults]
inventory=./ansible_hosts
remote_user=vagrant
private_key_file=$HOME/.vagrant.d/insecure_private_key
host_key_checking=False
retry_files_enabled=False
#interpreter_python=auto_silent
6 changes: 6 additions & 0 deletions ansible_hosts
@@ -0,0 +1,6 @@
[nodes]
node1 ansible_ssh_host=192.168.69.100
node2 ansible_ssh_host=192.168.69.101

[masters]
master ansible_ssh_host=192.168.69.10
5 changes: 5 additions & 0 deletions init_distributed_filesystem.sh
@@ -0,0 +1,5 @@
#!/bin/bash
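# Bring up the three VMs, then configure the Gluster pool from the master and
# mount the volume on the worker nodes. Requires Vagrant and Ansible on the
# host; run it from the repository root so ansible.cfg picks up ./ansible_hosts.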

vagrant up
ansible-playbook playbooks/nodemaster-conf.yml
ansible-playbook playbooks/nodes-conf.yml
21 changes: 21 additions & 0 deletions playbooks/nodemaster-conf.yml
@@ -0,0 +1,21 @@
---
- hosts: master
  become: true
  tasks:
    - name: Start gluster
      systemd:
        name: glusterd
        state: started

    - name: Peer every node
      gluster_peer:
        state: present
        nodes:
          - node1
          - node2

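    # become: true already runs these shell tasks as root; "force" is needed
    # because the /data brick paths sit on the root filesystem rather than the
    # /gluster/data mount prepared by scripts/glusterfs.sh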
    - name: Create the volume
      shell: gluster volume create gv0 replica 3 master:/data node1:/data node2:/data force

    - name: Start volume
      shell: gluster volume start gv0
19 changes: 19 additions & 0 deletions playbooks/nodes-conf.yml
@@ -0,0 +1,19 @@
---
- hosts: nodes
  become: true
  tasks:
    - name: Mount GlusterFS
      mount:
        name: /mnt
        fstype: glusterfs
        opts: _netdev,defaults
        src: "localhost:/gv0"
        state: mounted

- hosts: node1
  tasks:
    - name: Create file
      shell: echo "Hello world!" | sudo tee /mnt/helloworld-node1.txt

- hosts: node2
  tasks:
    - name: Create file
      shell: echo "Hello world!" | sudo tee /mnt/helloworld-node2.txt
3 changes: 3 additions & 0 deletions scripts/configuration.sh
@@ -0,0 +1,3 @@
echo "192.168.69.10 master" >> /etc/hosts
echo "192.168.69.100 node1" >> /etc/hosts
echo "192.168.69.101 node2" >> /etc/hosts
14 changes: 14 additions & 0 deletions scripts/glusterfs.sh
@@ -0,0 +1,14 @@
#!/bin/bash
yum install -y centos-release-gluster
yum install -y glusterfs-server
# yum install -y xfsprogs
service glusterd start

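# Partition /dev/sdb: the single ";" input line tells sfdisk to create one
# partition covering the whole disk with default start and size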
sfdisk /dev/sdb << EOF
;
EOF

mkfs.xfs /dev/sdb1
mkdir -p /gluster/data
# echo "/dev/sdb1 /gluster/data xfs defaults 1 2" >> /etc/fstab
mount /dev/sdb1 /gluster/data/
#mount -a && mount