Commits
32 commits
db83c21
Getting Lamoid Started
driftingaway86 Oct 20, 2021
f06ef13
Framework for Process/Config Manager
driftingaway86 Oct 20, 2021
63e1c7e
Slightly Better Error Management
driftingaway86 Oct 20, 2021
024602e
StartReflector Update
driftingaway86 Oct 21, 2021
fb0d64d
Turns out PIDs are ints ¯\_(ツ)_/¯
driftingaway86 Oct 21, 2021
1f01e82
Starting on registration
driftingaway86 Oct 21, 2021
b2a8885
GrazeAnatomy about a Alpaca & LLama doin stuff
driftingaway86 Oct 21, 2021
c68c3de
URI Validation on server URL
driftingaway86 Oct 21, 2021
02f8ed9
Wakaflaka-Alpaca.....
driftingaway86 Oct 21, 2021
199ed5c
Trix are for kids....
driftingaway86 Oct 21, 2021
5b9d748
Tagging cleanup
driftingaway86 Oct 21, 2021
1bb8d0c
YAML Gen Prep
driftingaway86 Oct 21, 2021
7fa3927
Making YAML part 1
driftingaway86 Oct 21, 2021
6f4cb95
YAML Prototype
driftingaway86 Oct 21, 2021
01be559
Start Collector
driftingaway86 Oct 22, 2021
98c48ae
documentation is key.....
driftingaway86 Oct 22, 2021
a4db6ec
Dupe for testing.
driftingaway86 Oct 22, 2021
9a03ea1
Killing the Server testing, yaml errors dont merge
driftingaway86 Oct 23, 2021
b194935
updates
driftingaway86 Oct 24, 2021
836bb22
Trying to find a better way to compare, inf loop
driftingaway86 Oct 24, 2021
bd32eb4
Testing morecompare options, process man good
driftingaway86 Oct 25, 2021
e732861
More testing
driftingaway86 Oct 25, 2021
7371725
Well this works.
driftingaway86 Oct 26, 2021
ff818a5
Clean up tmp config, todos, and what nots...
driftingaway86 Oct 26, 2021
8e676a6
Good House Keeping Awards
driftingaway86 Oct 26, 2021
f3b530a
TODOs and Typos
driftingaway86 Oct 27, 2021
6703d24
Working
driftingaway86 Oct 29, 2021
d4c408e
built not bought....
driftingaway86 Oct 29, 2021
13bb476
Get Ya Graze On...
driftingaway86 Oct 29, 2021
54ce6f8
x86 Bin
driftingaway86 Nov 2, 2021
8810886
Added Makefile to keep builds consistent
driftingaway86 Nov 3, 2021
42e978f
Error Handeling and Timeeouts from LLama pt1
driftingaway86 Nov 23, 2021
16 changes: 8 additions & 8 deletions README.md
@@ -6,7 +6,7 @@ The added code here allows the deployment of this tool to be a little more dynam
![enter image description here](https://github.com/scline/llama-sd/blob/master/docs/001.gif)

## What is LLAMA?
- LLAMA (Loss and LAtency MAtrix) is a library for testing and measuring network loss and latency between distributed endpoints.
+ LLAMA (Loss and Latency Matrix) is a library for testing and measuring network loss and latency between distributed endpoints.

It does this by sending UDP datagrams/probes from collectors to reflectors and measuring how long it takes for them to return, if they return at all. UDP is used to provide ECMP hashing over multiple paths (a win over ICMP) without the need for setup/teardown and per-packet granularity (a win over TCP).

@@ -15,14 +15,14 @@ This was developed and created by DropBox: [Github Project](https://github.com/d

## Components
### LLAMA-SERVER
- The server component is a basic Python3 Flask application serving API endpoints. Its primary function is to accept registration messages in JSON from remote clients and group/present the hosts in a formate LLAMA Collectors understand.
+ The server component is a basic Python3 Flask application serving API endpoints. Its primary function is to accept registration messages in JSON from remote clients and group/present the hosts in a format LLAMA Collectors understand.

#### Script Arguments and Environment Variables
- `-c, --config, APP_CONFIG` - [Configuration file](https://github.com/scline/llama-sd/blob/master/llama-server/src/config.yml) path
- `-g, --group, APP_GROUP` - Default group probes will be assigned if none is given. Probe settings will overwrite this value.
- `-i, --host, APP_HOST` - Server IP to listen for web traffic, 0.0.0.0 is all available IP's. Defaults to 127.0.0.1 if not set.
- `-k, --keepalive, APP_KEEPALIVE` - Keepalive settings, server will remove probe entries if they do not check in within this window value in seconds. This value is used if probes do not give one.
- - `-p, --port, APP_PORT` - Port webserver listens on. Defaults to 5000 if not set.
+ - `-p, --port, APP_PORT` - Port web server listens on. Defaults to 5000 if not set.
- `-v, --verbose, APP_VERBOSE` - Enable debug logging

### LLAMA- SCRAPER
@@ -51,18 +51,18 @@ Example of what one of these payloads looks like
]
```
#### Environment Variables
- - `INFLUXDB_HOST` - The IP or hostname of the influxDB to store metrics, using version 1.8 is recomended.
+ - `INFLUXDB_HOST` - The IP or hostname of the influxDB to store metrics, using version 1.8 is recommended.
- `INFLUXDB_NAME` - InfluxDB name where data is stored.
- `INFLUXDB_PORT` - InfluxDB listening port
- `LLAMA_SERVER` - URL of LLAMA Server endpoint for gathering host list. i.e. `http://llama.somehost.com:8081`

#### Groups
- You can have miltiple groupts of probes to one server. Assigning a group name of `BareMetal` vs `WAN` for example. All nodes in the WAN group will full-mesh test against each other while the `BareMetal` group will do the same for probes registered as such. This allows segmentation and future scaling considerations.
+ You can have multiple groups of probes to one server. Assigning a group name of `BareMetal` vs `WAN` for example. All nodes in the WAN group will perform a full-mesh test against each other while the `BareMetal` group will do the same for probes registered as such. This allows segmentation and future scaling considerations.
![enter image description here](https://github.com/scline/llama-sd/blob/master/docs/groups.png)

#### Script Arguments and Environment Variables
- `-c, --config, APP_CONFIG` - [Configuration file](https://github.com/scline/llama-sd/blob/master/llama-client/src/config.yml) path
- - `-g, --group, LLAMA_GROUP` - Group the probe will be assinged to.
+ - `-g, --group, LLAMA_GROUP` - Group the probe will be assigned to.
- `-i, --ip, LLAMA_SOURCE_IP` - Optional, if the client wants to tell the server what the probe IP is. By default the server will grab this information from the API call. This option is required if running servers and clients on the same host (docker IP mess).
- `-k, --keepalive, LLAMA_KEEPALIVE` - Keepalive settings, server will remove probe entries if they do not check in within this window value in seconds.
- `-s, --server, LLAMA_SERVER` - URL of LLAMA Server endpoint for gathering host list. i.e. `http://llama.somehost.com:8081`
@@ -79,7 +79,7 @@ Docker container that contains two LLAMA components created by Dropbox. LLAMA-Re
- `PROBE_NAME` - Generally a hostname that is tagged on metrics
- `PROVE_SHORTNAME` - Shorter name (i.e. pdx1 for a datacenter in Portland or usw2_1 for an AWS location)

- ## Instalation
+ ## Installation
Installation via Docker containers is going to be the simplest way. This will work for x86 or ARM-based systems like the Raspberry Pi.

### Copy-Paste Probe install (Linux)
Expand Down Expand Up @@ -111,7 +111,7 @@ smcline06/llama-probe:arm7-latest
## Network Requirements
Probes are hardcoded to use TCP and UDP port 8100 for communication. In the future, this will be configurable. If deploying this behind a NAT, for example, within a SOHO environment, then you will need to set up destination ports accordingly on your home router.

- | Source | Destination | Destination Port | Protocal
+ | Source | Destination | Destination Port | Protocol
|--|--|--|--|
| 0.0.0.0/0 (Internet) | Public IP/Interface |8100 | TCP + UDP|

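Pulling the documented pieces together, a probe launch under the environment variables and port requirements described in this README might look like the following. This is a sketch only: the server URL, group, keepalive value, probe name, and image tag are illustrative placeholders, not values taken from this PR.

```shell
# Hypothetical probe deployment; every value below is a placeholder.
docker run -d \
  --name llama-probe \
  -p 8100:8100/tcp \
  -p 8100:8100/udp \
  -e LLAMA_SERVER="http://llama.example.com:5000" \
  -e LLAMA_GROUP="WAN" \
  -e LLAMA_KEEPALIVE="60" \
  -e PROBE_NAME="probe01.example.com" \
  smcline06/llama-probe:x86-latest
```

Both the TCP and UDP mappings matter here: registration/host-list traffic and the UDP probe datagrams share port 8100 per the network requirements table.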
12 changes: 2 additions & 10 deletions build.sh
100644 → 100755
@@ -19,21 +19,13 @@ version=`cat $PWD/llama-server/version`
docker build $PWD/llama-server -t smcline06/llama-server:${tag}${version}
docker build $PWD/llama-server -t smcline06/llama-server:${tag}latest

- docker push smcline06/llama-server:${tag}${version}
- docker push smcline06/llama-server:${tag}latest

# Build scraper
version=`cat $PWD/llama-scraper/version`
docker build $PWD/llama-scraper -t smcline06/llama-scraper:${tag}${version}
docker build $PWD/llama-scraper -t smcline06/llama-scraper:${tag}latest

- docker push smcline06/llama-scraper:${tag}${version}
- docker push smcline06/llama-scraper:${tag}latest

# Build probe
version=`cat $PWD/llama-probe/version`
+ make -C ./llama-probe/lamoid build-lamoid
docker build $PWD/llama-probe -t smcline06/llama-probe:${tag}${version}
docker build $PWD/llama-probe -t smcline06/llama-probe:${tag}latest

- docker push smcline06/llama-probe:${tag}${version}
- docker push smcline06/llama-probe:${tag}latest
docker build $PWD/llama-probe -t smcline06/llama-probe:${tag}latest
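The build script names each image by combining an architecture `tag` prefix with a per-component `version` file, producing both a pinned tag and a `latest` tag. A minimal, self-contained sketch of that naming scheme (the prefix and version values are illustrative; in the real script `version` comes from `cat $PWD/llama-server/version`):

```shell
#!/bin/sh
# Sketch of the ${tag}${version} image-naming convention used in build.sh.
tag="arm-"        # architecture prefix; illustrative (empty for x86 builds)
version="1.0.2"   # illustrative; the script reads this from a version file

image="smcline06/llama-server:${tag}${version}"   # pinned tag
latest="smcline06/llama-server:${tag}latest"      # rolling tag

echo "$image"    # smcline06/llama-server:arm-1.0.2
echo "$latest"   # smcline06/llama-server:arm-latest
```

Keeping both tags lets deployments pin a known-good version while `latest` tracks the newest build for each architecture.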
5 changes: 2 additions & 3 deletions llama-probe/Dockerfile
@@ -2,8 +2,7 @@ FROM golang:1.13

WORKDIR /go/src/app

- copy entrypoint.sh entrypoint.sh
- copy register.go register.go
+ COPY lamoid-grazer /usr/local/bin/lamoid-grazer

RUN go get -d -v github.com/dropbox/llama
RUN go install -v github.com/dropbox/llama/cmd/collector
@@ -22,4 +21,4 @@ ENV \
EXPOSE 8100/tcp
EXPOSE 8100/udp

- CMD ["bash", "-c", "bash entrypoint.sh"]
+ ENTRYPOINT [ "lamoid-grazer" ]
Binary file added llama-probe/lamoid-grazer
Binary file not shown.
2 changes: 2 additions & 0 deletions llama-probe/lamoid/Makefile
@@ -0,0 +1,2 @@
build-lamoid:
GOOS=linux GOARCH=amd64 go build -o ../lamoid-grazer
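The Makefile's `GOOS=linux GOARCH=amd64 go build ...` line relies on the shell's per-command environment override: variables set in front of a command are exported only into that command's environment, so the cross-compile settings never leak into the caller's shell. A minimal sketch of that mechanism (using `sh -c 'echo ...'` as a stand-in for `go build`):

```shell
#!/bin/sh
# VAR=value cmd exports VAR only for cmd, not for the current shell.
unset GOOS
inner=$(GOOS=linux sh -c 'echo "$GOOS"')   # the child process sees the override
outer="${GOOS:-unset}"                     # the calling shell is untouched

echo "$inner"   # linux
echo "$outer"   # unset
```

This is why build-lamoid can target linux/amd64 without a wrapper script or any cleanup afterwards.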