diff --git a/README.md b/README.md index 815dbc9..62a6e3b 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ The added code here allows the deployment of this tool to be a little more dynam ![enter image description here](https://github.com/scline/llama-sd/blob/master/docs/001.gif) ## What is LLAMA? -LLAMA (Loss and LAtency MAtrix) is a library for testing and measuring network loss and latency between distributed endpoints. +LLAMA (Loss and Latency Matrix) is a library for testing and measuring network loss and latency between distributed endpoints. It does this by sending UDP datagrams/probes from collectors to reflectors and measuring how long it takes for them to return, if they return at all. UDP is used to provide ECMP hashing over multiple paths (a win over ICMP) without the need for setup/teardown and per-packet granularity (a win over TCP). @@ -15,14 +15,14 @@ This was developed and created by DropBox: [Github Project](https://github.com/d ## Components ### LLAMA-SERVER -The server component is a basic Python3 Flask application serving API endpoints. Its primary function is to accept registration messages in JSON from remote clients and group/present the hosts in a formate LLAMA Collectors understand. +The server component is a basic Python3 Flask application serving API endpoints. Its primary function is to accept registration messages in JSON from remote clients and group/present the hosts in a format LLAMA Collectors understand. #### Script Arguments and Environment Variables - `-c, --config, APP_CONFIG` - [Configuration file](https://github.com/scline/llama-sd/blob/master/llama-server/src/config.yml) path - `-g, --group, APP_GROUP` - Default group probes will be assigned if none is given. Probe settings will overwrite this value. - `-i, --host, APP_HOST` - Server IP to listen for web traffic, 0.0.0.0 is all available IP's. Defaults to 127.0.0.1 if not set. 
- `-k, --keepalive, APP_KEEPALIVE` - Keepalive settings, server will remove probe entries if they do not check in within this window value in seconds. This value is used if probes do not give one. -- `-p, --port, APP_PORT` - Port webserver listens on. Defaults to 5000 if not set. +- `-p, --port, APP_PORT` - Port web server listens on. Defaults to 5000 if not set. - `-v, --verbose, APP_VERBOSE` - Enable debug logging ### LLAMA-SCRAPER @@ -51,18 +51,18 @@ Example of what one of these payloads looks like ] ``` #### Environment Variables -- `INFLUXDB_HOST` - The IP or hostname of the influxDB to store metrics, using version 1.8 is recomended. +- `INFLUXDB_HOST` - The IP or hostname of the InfluxDB to store metrics, using version 1.8 is recommended. - `INFLUXDB_NAME` - InfluxDB name where data is stored. - `INFLUXDB_PORT` - InfluxDB listening port - `LLAMA_SERVER` - URL of LLAMA Server endpoint for gathering host list. e.g. `http://llama.somehost.com:8081` #### Groups -You can have miltiple groupts of probes to one server. Assigning a group name of `BareMetal` vs `WAN` for example. All nodes in the WAN group will full-mesh test against each other while the `BareMetal` group will do the same for probes registered as such. This allows segmentation and future scaling considerations. +You can register multiple groups of probes with one server, assigning a group name of `BareMetal` vs `WAN`, for example. All nodes in the `WAN` group will perform a full-mesh test against each other, while the `BareMetal` group will do the same for probes registered as such. This allows segmentation and future scaling considerations. ![enter image description here](https://github.com/scline/llama-sd/blob/master/docs/groups.png) #### Script Arguments and Environment Variables - `-c, --config, APP_CONFIG` - [Configuration file](https://github.com/scline/llama-sd/blob/master/llama-client/src/config.yml) path -- `-g, --group, LLAMA_GROUP` - Group the probe will be assinged to.
+- `-g, --group, LLAMA_GROUP` - Group the probe will be assigned to. - `-i, --ip, LLAMA_SOURCE_IP` - Optional, if the client wants to tell the server what the probe IP is. By default the server will grab this information from the API call. This option is required if running servers and clients on the same host (docker IP mess). - `-k, --keepalive, LLAMA_KEEPALIVE` - Keepalive settings, server will remove probe entries if they do not check in within this window value in seconds. - `-s, --server, LLAMA_SERVER` - URL of LLAMA Server endpoint for gathering host list. e.g. `http://llama.somehost.com:8081` @@ -79,7 +79,7 @@ Docker container that contains two LLAMA components created by Dropbox. LLAMA-Re - `PROBE_NAME` - Generally a hostname that is tagged on metrics - `PROBE_SHORTNAME` - Shorter name (e.g. pdx1 for a datacenter in Portland or usw2_1 for an AWS location) -## Instalation +## Installation Installation via Docker containers is going to be the simplest way. This will work for x86 or ARM-based systems like the Raspberry Pi. ### Copy-Paste Probe install (Linux) @@ -111,7 +111,7 @@ smcline06/llama-probe:arm7-latest ## Network Requirements Probes are hardcoded to use TCP and UDP port 8100 for communication. In the future, this will be configurable. If deploying this behind a NAT, for example, within a SOHO environment, then you will need to set up destination ports accordingly on your home router.
-| Source | Destination | Destination Port | Protocal +| Source | Destination | Destination Port | Protocol |--|--|--|--| | 0.0.0.0/0 (Internet) | Public IP/Interface |8100 | TCP + UDP| diff --git a/build.sh b/build.sh old mode 100644 new mode 100755 index 5b46eb3..3fc663e --- a/build.sh +++ b/build.sh @@ -19,21 +19,13 @@ version=`cat $PWD/llama-server/version` docker build $PWD/llama-server -t smcline06/llama-server:${tag}${version} docker build $PWD/llama-server -t smcline06/llama-server:${tag}latest -docker push smcline06/llama-server:${tag}${version} -docker push smcline06/llama-server:${tag}latest - # Build scraper version=`cat $PWD/llama-scraper/version` docker build $PWD/llama-scraper -t smcline06/llama-scraper:${tag}${version} docker build $PWD/llama-scraper -t smcline06/llama-scraper:${tag}latest -docker push smcline06/llama-scraper:${tag}${version} -docker push smcline06/llama-scraper:${tag}latest - # Build probe version=`cat $PWD/llama-probe/version` +make -C ./llama-probe/lamoid build-lamoid docker build $PWD/llama-probe -t smcline06/llama-probe:${tag}${version} -docker build $PWD/llama-probe -t smcline06/llama-probe:${tag}latest - -docker push smcline06/llama-probe:${tag}${version} -docker push smcline06/llama-probe:${tag}latest +docker build $PWD/llama-probe -t smcline06/llama-probe:${tag}latest \ No newline at end of file diff --git a/llama-probe/Dockerfile b/llama-probe/Dockerfile index 5090eed..3840385 100644 --- a/llama-probe/Dockerfile +++ b/llama-probe/Dockerfile @@ -2,8 +2,7 @@ FROM golang:1.13 WORKDIR /go/src/app -copy entrypoint.sh entrypoint.sh -copy register.go register.go +COPY lamoid-grazer /usr/local/bin/lamoid-grazer RUN go get -d -v github.com/dropbox/llama RUN go install -v github.com/dropbox/llama/cmd/collector @@ -22,4 +21,4 @@ ENV \ EXPOSE 8100/tcp EXPOSE 8100/udp -CMD ["bash", "-c", "bash entrypoint.sh"] +ENTRYPOINT [ "lamoid-grazer" ] diff --git a/llama-probe/lamoid-grazer b/llama-probe/lamoid-grazer new file mode 100755 index 
0000000..886e9c4 Binary files /dev/null and b/llama-probe/lamoid-grazer differ diff --git a/llama-probe/lamoid/Makefile b/llama-probe/lamoid/Makefile new file mode 100644 index 0000000..821bd4d --- /dev/null +++ b/llama-probe/lamoid/Makefile @@ -0,0 +1,2 @@ +build-lamoid: + GOOS=linux GOARCH=amd64 go build -o ../lamoid-grazer \ No newline at end of file diff --git a/llama-probe/lamoid/alpaca/graze.go b/llama-probe/lamoid/alpaca/graze.go new file mode 100644 index 0000000..09b038d --- /dev/null +++ b/llama-probe/lamoid/alpaca/graze.go @@ -0,0 +1,420 @@ +package alpaca + +import ( + "bytes" + "crypto/md5" + "encoding/json" + "fmt" + "io/ioutil" + "log" + "net/http" + "net/url" + "os" + "os/exec" + "syscall" + "time" + + "github.com/google/go-cmp/cmp" +) + +//TODO: Refactor HTTP Client usage +//TODO: Clean up functions that don't need to be a method and move them some place else. +//TODO: CLI Flag to control config check interval +//TODO: Unit Testing +//TODO: Documentation +//TODO: Be less comedic with the naming..... 
+ +//GrazeAnatomy - A method called on LamoidEnv which registers the current running LLAMA configuration +//to the LLAMA-SERVER +func (g *LamoidEnv) GrazeAnatomy() error { + + log.Printf("[LAMOID-REGISTER]: Performing Registration with LLAMA Server %s", g.Server) + + //Build the registration payload + lamoidAnatomy := &PayLoad{ + Port: g.Port, + Keepalive: g.KeepAlive, + Ip: g.SourceIP, + Group: g.Group, + } + + //TODO: Read in version number + lamoidAnatomy.Tags.Version = "0.1.0" + lamoidAnatomy.Tags.ProbeName = g.ProbeName + lamoidAnatomy.Tags.ProbeShortname = g.ProbeShortName + + byteArray, err := json.Marshal(lamoidAnatomy) + if err != nil { + log.Println(err) + } + + //Build and validate the LLAMA-SERVER URL + serverURL, err := url.ParseRequestURI(fmt.Sprintf("%sapi/v1/register", g.Server)) + if err != nil { + log.Printf("[URL-ERROR]: The url constructed was not a valid URI, check LLAMA_SERVER, %s", err) + return err + } + + //Build the HTTP POST request + request, err := http.NewRequest("POST", serverURL.String(), bytes.NewBuffer(byteArray)) + + if err != nil { + log.Printf("[LAMOID-REGISTER]: There was a problem creating a new request object, %s", err) + return err + } + + request.Header.Set("Content-Type", "application/json; charset=UTF-8") + + //HTTP Client + client := &http.Client{ + Timeout: 5 * time.Second, + } + + //Process HTTP request and log the status + response, err := client.Do(request) + + if err != nil { + log.Printf("[LAMOID-REGISTER]: There was a problem making a request, %s", err) + return err + } + + defer func() { + err := response.Body.Close() + + if err != nil { + log.Printf("[LAMOID-REGISTER]: There was a problem closing the response from LLAMA Server, %s", err) + } + }() + + log.Print("[LAMOID-REGISTER]: Registration Process Completed") + log.Printf("[LAMOID-REGISTER]: Response Status: %s", response.Status) + + return nil +} + +//StartReflector - A method called on LamoidEnv which starts the LLAMA Reflector application and updates
LamoidEnv with +//its process. +func (g *LamoidEnv) StartReflector() { + + // Build os exec command to launch reflector with a given param + reflector := exec.Command("reflector", "-port", fmt.Sprint(g.Port)) + + // Set the process to output to Standard Out + reflector.Stdout = os.Stdout + reflector.Stderr = os.Stderr + + // Execute the exec command to start reflector, log on error. + log.Print("[LAMOID]: Starting Reflector") + err := reflector.Start() + + if err != nil { + log.Printf("[LAMOID-REFLECTOR]: There was an error starting the reflector, %s", err) + } + + //Wait in go routine + go func() { + err = reflector.Wait() + if err != nil { + log.Printf("[ERROR]: Reflector process didn't close gracefully") + } + }() + + log.Printf("[REFLECTOR-PID]: %v", reflector.Process.Pid) + g.Reflector = reflector +} + +//StartCollector - A method called on LamoidEnv which starts the LLAMA Collector application and updates LamoidEnv with +//its OS process identification (PID) +func (g *LamoidEnv) StartCollector() { + + // Build os exec command to launch collector with a given param + collector := exec.Command("collector", "-llama.config", "config.yaml") + + // Set the process to output to Standard Out + collector.Stdout = os.Stdout + collector.Stderr = os.Stderr + + // Execute the exec command to start collector, log on error. + log.Print("[LAMOID]: Starting Collector") + err := collector.Start() + + if err != nil { + log.Printf("[LAMOID-COLLECTOR]: There was an error starting the collector, %s", err) + } + + //Wait in go routine + go func() { + err = collector.Wait() + if err != nil { + log.Printf("[ERROR]: Collector process didn't close gracefully") + } + }() + + log.Printf("[COLLECTOR-PID]: %v", collector.Process.Pid) + g.Collector = collector +} + +//GrazeConfig - A method called on LamoidEnv which retrieves the running configuration from the LLAMA Server's configuration +//API and returns a []byte object used to write the configuration to the local node. Must be run before the +collector is started. +func (g *LamoidEnv) GrazeConfig() ([]byte, error) { + + // Build and validate URL + configReqURL, err := url.ParseRequestURI(fmt.Sprintf("%sapi/v1/config/%s", g.Server, g.Group)) + if err != nil { + log.Printf("[LAMOID-URL]: The url constructed was not a valid URI, check LLAMA_SERVER & LLAMA_GROUP, %s", err) + return nil, err + } + + // Build request + request, err := http.NewRequest("GET", configReqURL.String(), nil) + if err != nil { + log.Printf("[LAMOID-CLIENT]: There was a problem creating a new request object, %s", err) + return nil, err + } + + configReqQuery := request.URL.Query() + configReqQuery.Add("llamaport", fmt.Sprint(g.Port)) + request.URL.RawQuery = configReqQuery.Encode() + + //HTTP Client + client := &http.Client{ + Timeout: time.Second * 5, + } + + // Process HTTP request + response, err := client.Do(request) + if err != nil { + log.Printf("[LAMOID-CLIENT]: There was a problem making a request to LLAMA Server, %s", err) + return nil, err + } + + defer func() { + err := response.Body.Close() + + if err != nil { + log.Printf("[LAMOID-CLIENT]: There was a problem closing the config response from LLAMA Server, %s", err) + } + }() + + // Read response into bytes + respBytes, err := ioutil.ReadAll(response.Body) + if err != nil { + log.Printf("[LAMOID-CLIENT]: There was a problem reading the config response from LLAMA_SERVER, %s", err) + return nil, err + } + + return respBytes, nil + +} + +//WriteConfig - Accept []bytes that will be written to the local node as config.yaml +func (g *LamoidEnv) WriteConfig(respBytes []byte) { + + yamlFile, err := os.Create("config.yaml") + if err != nil { + log.Printf("[YAML-WRITE-ERROR]: %s", err) + return + } + + defer func() { + err = yamlFile.Close() + if err != nil { + log.Printf("[YAML-WRITE-ERROR]: %s", err) + } + }() + + _, writeErr := yamlFile.Write(respBytes) + + if writeErr != nil { + log.Printf("[YAML-WRITE-ERROR]: %s", writeErr) + } + +} + +//WriteTempConfig - Accept []bytes that will be written to the local node as tmp-config.yaml +func (g *LamoidEnv) WriteTempConfig(respBytes []byte) { + + yamlFile, err := os.Create("tmp-config.yaml") + if err != nil { + log.Printf("[YAML-WRITE-ERROR]: %s", err) + return + } + + defer func() { + err = yamlFile.Close() + if err != nil { + log.Printf("[YAML-WRITE-ERROR]: %s", err) + } + }() + + _, writeErr := yamlFile.Write(respBytes) + + if writeErr != nil { + log.Printf("[YAML-WRITE-ERROR]: %s", writeErr) + } + +} + +//ReadConfig - Read the local configuration file, used to compare new and old config. +func (g *LamoidEnv) ReadConfig(configFile string) []byte { + + configReader, err := os.Open(configFile) + if err != nil { + log.Printf("There was a problem opening %s", configFile) + } + + defer func() { + err := configReader.Close() + + if err != nil { + log.Printf("There was a problem closing %s", configFile) + } + }() + + // Read the whole file; calling Read() on a nil slice would return zero bytes. + configRawData, readErr := ioutil.ReadAll(configReader) + if readErr != nil { + log.Print("There was a problem reading config file to raw bytes.") + } + + return configRawData + +} + +//ValidateConfig - Validates the new and current running config via MD5 hash. +func (g *LamoidEnv) ValidateConfig() bool { + + var config []byte + + for { + configBytes, err := g.GrazeConfig() + if err != nil { + log.Printf("[CONFIG-ERROR]: There was an error getting the config, %s", err) + time.Sleep(time.Second * 10) + continue + } + config = configBytes + break + } + + g.WriteTempConfig(config) + + newConfig := md5.Sum(g.ReadConfig("tmp-config.yaml")) + + currentConfig := md5.Sum(g.ReadConfig("config.yaml")) + + log.Printf("[NEW-CONFIG]: Hash - %s", fmt.Sprint(newConfig)) + log.Printf("[OLD-CONFIG]: Hash - %s", fmt.Sprint(currentConfig)) + + os.Remove("tmp-config.yaml") + + return cmp.Equal(newConfig, currentConfig) + +} + +//StartGrazing - Get ya Graze on LLAMA.....
+func (g *LamoidEnv) StartGrazing() { + + var config []byte + + //Initial Run + g.StartReflector() + + log.Print("[LAMOID-INIT]: Waiting for Llama Server....") + + for { + err := g.GrazeAnatomy() + if err != nil { + log.Printf("[LAMOID-INIT]: Registration Failed. Error - %s", err) + log.Print("[LAMOID-INIT]: Trying Again....") + time.Sleep(time.Second * 10) + continue + } + break + } + + //Give the LLama some time to eat....sheeeeeeshhhh + time.Sleep(time.Second * 10) + + for { + configBytes, err := g.GrazeConfig() + if err != nil { + log.Printf("[CONFIG-ERROR]: There was an error getting the config, %s", err) + time.Sleep(time.Second * 10) + continue + } + config = configBytes + break + } + + g.WriteConfig(config) + + g.StartCollector() +} + +//Graze - Why you are here. +func (g *LamoidEnv) Graze() { + // Main loop for running the llama-probe + g.StartGrazing() + +Graze: + for { + time.Sleep(time.Second * 60) + log.Printf("[LAMOID-INFO]: Polling Config") + switch g.ValidateConfig() { + case true: + for { + err := g.GrazeAnatomy() + if err != nil { + log.Printf("[LAMOID-INIT]: Registration Failed. Error - %s", err) + log.Print("[LAMOID-INIT]: Trying Again....") + time.Sleep(time.Second * 10) + continue + } + break + } + continue Graze + case false: + + var config []byte + + log.Printf("[LAMOID-INFO]: New Config Detected - Reloading Collector") + + log.Printf("[LAMOID-INFO]: Updating LLAMA SERVER Registration") + + for { + err := g.GrazeAnatomy() + if err != nil { + log.Printf("[LAMOID-INIT]: Registration Failed. Error - %s", err) + log.Print("[LAMOID-INIT]: Trying Again....") + time.Sleep(time.Second * 10) + continue + } + break + } + + log.Printf("[LAMOID-INFO]: Writing New Config") + + time.Sleep(time.Second * 10) + + for { + configBytes, err := g.GrazeConfig() + if err != nil { + log.Printf("[CONFIG-ERROR]: There was an error getting the config, %s", err) + time.Sleep(time.Second * 10) + continue + } + config = configBytes + break + } + + g.WriteConfig(config) + + log.Printf("[LAMOID-INFO]: Reloading Collector with new config") + + err := g.Collector.Process.Signal(syscall.SIGHUP) + if err != nil { + log.Printf("[LAMOID-ERR]: There was a problem trying to send SIGHUP to collector process, %s", err) + } + + continue Graze + } + } +} diff --git a/llama-probe/lamoid/alpaca/structs.go b/llama-probe/lamoid/alpaca/structs.go new file mode 100644 index 0000000..c4b71ac --- /dev/null +++ b/llama-probe/lamoid/alpaca/structs.go @@ -0,0 +1,81 @@ +package alpaca + +import "os/exec" + +// PayLoad - struct for the JSON we send to the server for registration +type PayLoad struct { + Port int `json:"port"` + Keepalive int `json:"keepalive,omitempty"` + Ip string `json:"ip,omitempty"` + Tags struct { + Version string `json:"version"` + ProbeShortname string `json:"probe_shortname"` + ProbeName string `json:"probe_name"` + } `json:"tags"` + Group string `json:"group,omitempty"` +} + +// LamoidEnv struct containing the running environment information +// for the grazing llama probe. +type LamoidEnv struct { + SourceIP string `env:"LLAMA_SOURCE_IP"` + Server string `env:"LLAMA_SERVER"` + Group string `env:"LLAMA_GROUP"` + Port int `env:"LLAMA_PORT"` + KeepAlive int `env:"LLAMA_KEEPALIVE"` + ProbeName string `env:"PROBE_NAME"` + ProbeShortName string `env:"PROBE_SHORTNAME"` + Reflector *exec.Cmd + Collector *exec.Cmd +} + +// LLamaConfig - YAML config, strongly typed. Maybe we can use this one day.....
+type LLamaConfig struct { + Summarization struct { + Interval int `yaml:"interval"` + Handlers int `yaml:"handlers"` + } `yaml:"summarization"` + API struct { + Bind string `yaml:"bind"` + } `yaml:"api"` + Ports struct { + Default struct { + IP string `yaml:"ip"` + Port int `yaml:"port"` + Tos int `yaml:"tos"` + Timeout int `yaml:"timeout"` + } `yaml:"default"` + } `yaml:"ports"` + PortGroups struct { + Default []struct { + Port string `yaml:"port"` + Count int `yaml:"count"` + } `yaml:"default"` + } `yaml:"port_groups"` + RateLimits struct { + Default struct { + Cps float64 `yaml:"cps"` + } `yaml:"default"` + } `yaml:"rate_limits"` + Tests []struct { + Targets string `yaml:"targets"` + PortGroup string `yaml:"port_group"` + RateLimit string `yaml:"rate_limit"` + } `yaml:"tests"` + Targets struct { + Default []struct { + IP string `yaml:"ip"` + Port int `yaml:"port"` + Tags struct { + Version string `yaml:"version"` + ProbeShortname string `yaml:"probe_shortname"` + ProbeName string `yaml:"probe_name"` + DstName string `yaml:"dst_name"` + DstShortname string `yaml:"dst_shortname"` + SrcName string `yaml:"src_name"` + SrcShortname string `yaml:"src_shortname"` + Group string `yaml:"group"` + } `yaml:"tags"` + } `yaml:"default"` + } `yaml:"targets"` +} diff --git a/llama-probe/lamoid/go.mod b/llama-probe/lamoid/go.mod new file mode 100644 index 0000000..2f61ab3 --- /dev/null +++ b/llama-probe/lamoid/go.mod @@ -0,0 +1,8 @@ +module lamoid + +go 1.17 + +require ( + github.com/Netflix/go-env v0.0.0-20210215222557-e437a7e7f9fb + github.com/google/go-cmp v0.5.6 +) diff --git a/llama-probe/lamoid/go.sum b/llama-probe/lamoid/go.sum new file mode 100644 index 0000000..4c3edbc --- /dev/null +++ b/llama-probe/lamoid/go.sum @@ -0,0 +1,6 @@ +github.com/Netflix/go-env v0.0.0-20210215222557-e437a7e7f9fb h1:w9IDEB7P1VzNcBpOG7kMpFkZp2DkyJIUt0gDx5MBhRU= +github.com/Netflix/go-env v0.0.0-20210215222557-e437a7e7f9fb/go.mod h1:9XMFaCeRyW7fC9XJOWQ+NdAv8VLG7ys7l3x4ozEGLUQ= 
+github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ= +github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4= +golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= diff --git a/llama-probe/lamoid/main.go b/llama-probe/lamoid/main.go new file mode 100644 index 0000000..f1df8d3 --- /dev/null +++ b/llama-probe/lamoid/main.go @@ -0,0 +1,21 @@ +package main + +import ( + "errors" + "lamoid/alpaca" + "log" + + env "github.com/Netflix/go-env" +) + +func main() { + + var llama alpaca.LamoidEnv + + _, err := env.UnmarshalFromEnviron(&llama) + if err != nil || errors.Is(err, env.ErrUnexportedField) { + log.Fatalf("[ENV-ERR]: There was a problem with one or more expected environment variables: %s", err) + } + + llama.Graze() +} diff --git a/llama-probe/entrypoint.sh b/llama-probe/legacy/entrypoint.sh similarity index 100% rename from llama-probe/entrypoint.sh rename to llama-probe/legacy/entrypoint.sh diff --git a/llama-probe/register.go b/llama-probe/legacy/register.go similarity index 100% rename from llama-probe/register.go rename to llama-probe/legacy/register.go