Collection of containers to run the Enroll ecosystem in a development context
Download and install Docker Desktop. Important: this requires Docker Desktop 4.31 and Docker Compose 2.24 or higher.
Clone the ea_enterprise repo:
git clone https://github.com/ideacrew/ea_enterprise.git
Copy env.dev.example to env.dev and change the values as needed (don't commit env files):
cp env.dev.example env.dev
Ensure the other repositories are present on your local machine and at the same level as ea_enterprise:
- enroll
- fdsh_gateway
- medicaid_gateway
- medicaid_eligibility
- aca_entities
- polypress
- fti
The layout should look like this (the projects folder name is up to you):
~/
└── projects/
├── ea_enterprise
├── enroll
├── fdsh_gateway
├── medicaid_gateway
├── medicaid_eligibility
├── aca_entities
├── fti
└── polypress
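Before going further, a small sanity check can confirm the sibling checkouts are in place. This helper is a hypothetical sketch (not part of ea_enterprise); it just reports which of the directories listed above are missing:

```shell
# Hypothetical helper: report any missing sibling checkouts from the
# list above. Pass the directory that should contain them (e.g. "..").
check_repos() {
  missing=""
  for repo in enroll fdsh_gateway medicaid_gateway medicaid_eligibility aca_entities polypress fti; do
    [ -d "$1/$repo" ] || missing="$missing $repo"
  done
  if [ -n "$missing" ]; then echo "missing:$missing"; else echo "all repos present"; fi
}

check_repos ..   # run from inside ea_enterprise
```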
Be sure all repositories are up to date with their respective release branches before running any commands! The following command will update each of the sibling directories to the latest primary branch:
for dir in . ../enroll ../fdsh_gateway ../medicaid_gateway ../aca_entities ../fti ../polypress; do
  echo "Pulling trunk in $(basename $dir)..."
  pushd $dir > /dev/null
  git checkout $(git branch --format='%(refname:short)' | grep -E '^(main|trunk)$') && git pull
  popd > /dev/null
done
- inside the container: this means we will run a command inside the container (the "guest"). For example, `docker compose exec enroll /bin/bash` will run the command `/bin/bash` inside the `enroll` container; from there we can execute any command that is available inside the container, for example, `rails c` will open a rails console inside the container.
- outside the container: this means we will run a command on the "host", usually in the context of the developer's Mac.
By default, the docker-compose.yml is configured to launch only enroll and the services required to run it; however, other profiles can be used to launch additional services. The profiles are:
- all: launches all services
- polypress: launches polypress
- medicaid_gateway: launches medicaid_gateway / mitc
- keycloak: launches keycloak
- fdsh_gateway: launches fdsh_gateway
- fti: launches fti
- enroll_yard_doc: launches enroll documentation site
the profiles can be used like this:
docker compose --profile polypress up
docker compose --profile all up
you can also use multiple profiles at the same time
docker compose --profile polypress --profile medicaid_gateway up
Some of the workarounds are focused on making enroll run on Arm-based Macs; on Intel Macs it can technically be installed directly on the host. You can still use ea_enterprise on Intel Macs, just comment out the M1 hacks.
The fdsh_gateway and enroll services have "localhost" as the "server" in config/mongoid.yml. The Docker Compose configuration "patches" this by mounting another configuration: the patched file is located at config/mongoid.yml.docker and is mounted in the container at /{APP}/config/mongoid.yml.
aca_entities is already mounted and can be changed in the Gemfile - this allows local aca_entities modifications to be reflected when a rails server/console process is bounced, with no push to a remote or even bundle update/install required.
The trick to using the local gem is just to modify the already-built service you are working with (i.e., only Enroll if you are updating Enroll's Gemfile).
- run `docker compose build` if it has not run already (either on its own or as part of a previous run of `docker compose up`)
- wait
- modify the Gemfile to point to the mounted file path:
gem 'aca_entities', path: "/aca_entities"
- start just enroll via the UI or command line (do not restart everything)
Note that running a build with this local gem configuration in place will not currently work.
For remote gem additions/updates, you can't successfully start the container/server after modifying the Gemfile. Building (`docker compose build`) after updating the Gemfile also produces an image that can't successfully start.
The solution is to start from an existing runnable container (the Gemfile matches Gemfile.lock, and the gems in the lockfile are installed on the guest). Run the container and wait until it has fully started, then modify the Gemfile (leaving the container running).
You should then be able to run `docker compose exec <the_service> bundle install` in another terminal window. This installs the new/updated gem and updates the Gemfile.lock on both guest and host; you can then commit the Gemfile.lock.
Further restarts of the container should be fine, and re-builds should also proceed happily.
Docker Compose patches the MG StimulusReflex initializer to ignore dev:cache not being enabled. This is done via a volume mount in docker-compose.yml: the file is located at hotpatches/stimulus_reflex.rb and is mounted in the container at /{APP}/config/initializers/stimulus_reflex.rb.
Wicked PDF is patched to use the "local" binary (not the one from the wkhtmltopdf-binary-edge gem). This is also done via a volume mount in docker-compose.yml: the file is located at hotpatches/wicked_pdf.rb and is mounted in the container at /{APP}/config/initializers/wicked_pdf.rb. The wkhtmltopdf binary is installed in the container at /usr/local/bin/wkhtmltopdf.
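To confirm the patched binary is the one in use, you can check it from the host once the enroll container is up (service and path are as described above):

```shell
# wkhtmltopdf is installed at a fixed path in the container; --version
# should report the locally installed binary:
docker compose exec enroll /usr/local/bin/wkhtmltopdf --version
```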
Note: commands must be run in the terminal from inside the ea_enterprise directory.
- Start all services
docker compose up
- Start enroll and any of its dependencies
docker compose up enroll
- Start 2 services
docker compose up enroll fdsh_gateway
- Rebuild all the containers
docker compose build
- Rebuild a specific container (example: the enroll container)
docker compose build enroll
- Shell inside a container
docker compose exec enroll /bin/bash
- Rubocop inside enroll
docker compose exec enroll /enroll/rubocop_check_last_commit.sh
docker compose exec enroll /enroll/rubocop_check_pre_commit.sh
- docker compose "exec vs run"
The slight difference between exec and run is that run will create a new container, while exec will run the command in an existing container. Examples:
- this will execute /bin/bash in the running enroll container; if enroll is not running, the command will fail:
docker compose exec enroll /bin/bash
- this will create a new container and execute /bin/bash inside it; an already-running enroll container is not affected (the new one runs in parallel), and it will not fail if enroll is not running:
docker compose run enroll /bin/bash
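One practical note on `run`: each invocation leaves a stopped container behind unless you pass `--rm`, so for one-off shells the following is usually preferable:

```shell
# --rm removes the throwaway container when the shell exits:
docker compose run --rm enroll /bin/bash
```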
With a mongodb compose service running, and a directory-based dump on your local (host) machine, run
mongo_restore_dump.sh from your local machine. A bulk restore of multiple DBs or a targeted
single DB (allowing for a DB rename) is supported (see the script for details):
./scripts/mongo_restore_dump.sh [-r <rename_db_to>] <dump_dir_path>
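For illustration, assuming a dump directory at ~/dumps/nightly (the path and DB name here are hypothetical; see the script for the exact semantics of how the source DB is selected):

```shell
# bulk restore of every DB in the dump:
./scripts/mongo_restore_dump.sh ~/dumps/nightly
# targeted restore, renaming the restored DB to enroll_dev:
./scripts/mongo_restore_dump.sh -r enroll_dev ~/dumps/nightly
```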
- start a rails console (inside the container):
rails c
- start a rails console (from outside, to the container)
docker compose exec enroll rails c
- run tests (inside the container):
RAILS_ENV=test bundle exec rspec components/financial_assistance/spec/
- run tests outside the container (cd to ea_enterprise first):
docker compose exec -e "RAILS_ENV=test" enroll bundle exec rspec components/financial_assistance/spec/
A container restart is needed in cases very similar to the rules for when to restart a rails application:
- you changed something under /config
- you need to switch branches
- you altered an env variable set via the Docker "run-time" configuration (`*.env.dev` files)
A container rebuild is needed when:
- you added or deleted a gem
- you changed anything in the docker-compose.yml file
- you changed the Dockerfile of any container
- you altered env variables set via the Docker "build-time" configuration (`.env` file)
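As a sketch of these rules, a hypothetical helper (not part of ea_enterprise) could map a changed file to the suggested command; the service name is passed separately:

```shell
# Hypothetical sketch: suggest restart vs rebuild for a changed file,
# following the rules above. Usage: suggest_action <changed_file> <service>
suggest_action() {
  case "$(basename "$1")" in
    # build-time changes need a rebuild, then bring the service back up:
    Gemfile|Gemfile.lock|docker-compose.yml|Dockerfile|.env)
      echo "docker compose build $2 && docker compose up -d $2" ;;
    # everything else (config, *.env.dev, branch switches) only needs a restart:
    *)
      echo "docker compose restart $2" ;;
  esac
}

suggest_action ../enroll/Gemfile enroll   # a gem change: rebuild
suggest_action .env.dev enroll            # a run-time env change: restart
```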
The IdeaCrew GHA runs the specs as "engines", which means it runs the specs for each component separately from the main enroll app. This can now be done under Docker by following this method.
- start the containers
docker compose up
- open a shell on the running enroll
docker compose exec enroll bash
- go to the component you want to run the specs for, for example financial assistance
cd components/financial_assistance
- bundle install
bundle install
- run the specs
bundle exec rspec
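The steps above can also be collapsed into a single command from the host, assuming the enroll container is already up (`-w`/`--workdir` requires a reasonably recent Compose, and skips the interactive shell but not the bundle install step):

```shell
# run an engine's specs from the host in one command:
docker compose exec -w /enroll/components/financial_assistance \
  -e RAILS_ENV=test enroll bundle exec rspec
```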
It is possible to run cucumber; however, there are two drawbacks. First, the webdrivers gem has to be removed manually and the container restarted. Second, it uses a "modified" env.rb that removes all references to the webdriver gem; this is done automatically via a virtual volume in Docker Compose (as an end user you don't need to worry about this unless something big changes in cucumber).
These are the steps to enable cucumber:
- On enroll, edit the Gemfile and remove:
gem 'webdrivers', '~> 3.0'
- Trigger a restart for enroll with
docker compose restart enroll
- run cucumber
- from inside the container (attach a shell to the container first)
NODE_ENV=test RAILS_ENV=test bundle exec cucumber features/financial_assistance/view_eligibility.feature
- from outside the container (inside the ea_enterprise directory):
docker compose exec -e "RAILS_ENV=test" enroll bundle exec cucumber features/financial_assistance/view_eligibility.feature
After rebuilding the container for the first time, only step 3 is needed
- rspec
uninitialized constant Mongoid::Matchers
you are not running the specs in the test environment; run them with RAILS_ENV=test bundle exec rspec
Docker Compose supports image build-time and container run-time environment variables.
By default, Compose will use a .env file in the project root to interpolate variables in for references (e.g.,
$MYVARIABLE) in docker-compose.yml. (It will also collect some host machine environment variables for use as well,
like $PWD.) These can be referenced anywhere in docker-compose.yml, and we pass some to a given service's Dockerfile
via <service>.build.args - These are picked up by Dockerfile ARG instructions and are used in the build. An example
is setting image environment variables (via Dockerfile ENV instructions) - these variables are then available in the
building image, and are also available at run time. Changes to these typically require an image rebuild. To minimize
host env var collisions, increase flexibility, and keep clarity of intent, it's suggested that build-time vars be
limited to what's needed to run a successful build.
The docker-compose.yml uses a couple of mechanisms Compose provides for setting container run-time variables
(<service>.env_file and <service>.environment). The bulk of environment settings fed to Rails processes are here,
and will require only a container restart to see a change. A common convention in this project is using files named like
.env.dev to provide these run-time values (via docker-compose.yml env_file key). Note that new processes started in
a running container (i.e., using docker compose exec), will not see variable changes until that original container is
restarted.
Generally applicable variables are provided under the ./site directory, as are service-specific ones; these are
given default values in env.dev.base files. These base files typically won't need customization, but any overrides
that may be required can be placed in your root .env.dev.
(Note that in some cases, interpolated build-time variables are fed to run-time variables via environment entries
in the docker-compose.yml. This allows overrides as a workflow optimization: changes to these build-time variables
can be seen in containers without a rebuild.)
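When it's unclear which value won, Compose itself can show the resolved result:

```shell
# render the fully interpolated compose config (optionally for one service):
docker compose config enroll
# inspect the run-time environment a live container actually sees:
docker compose exec enroll env | sort
```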
Because some customers keep their repos outside the default IC layout, ea_enterprise has a way to change the expected directory of an app, and also the ability to inject a GitHub token for private repos. This is the way to enable it:
- create a file called .env in the root of the ea_enterprise directory; it is loaded at build time and its values are used for this.
To change the directory of a microservice, use the following format:
ENROLL_DIR=enroll-dc-repo
POLY_DIR=dchbx-repos/polypress
MG_DIR=dchbx-repos/medicaid_gateway
FTI_DIR=dchbx-repos/fti
MITC_DIR=dchbx-repos/medicaid_eligibility
FDSH_DIR=dchbx-repos/fdsh_gateway
To inject a GitHub token, add this line to the .env file:
BUNDLE_GITHUB__COM=x-access-token:<token>
- Some people complain about "write" speed, and it's true that it's slow. However, under the "experimental features" there is an option called "VirtioFS" which is fast (close to native fast); the recommendation is to enable it.
- When changing env vars or other things that require a container restart, it can be useful to note that you don't need to wait for the enroll web app to completely start (via `docker compose up`) before starting the rails console via `docker compose exec`; once bundler is done installing gems to the container, you can likely proceed to use it in a second process.
- Changes to `hotpatches` files can be seen in the guest without a rebuild or a restart of the primary container process. So, if you have the enroll rails server running, and you have a separate rails console process running via `docker compose exec`, you can just restart your rails console process and see hotpatch changes in that process.
- How do I enter the rails console? Use `docker compose exec <service_name> rails c`. Or, in the Docker Desktop UI there is a button to open a terminal quickly; from there, `rails c` as normal.
- Why are we not executing bundle install in docker compose? It's slower, and in theory gems should not change that much.
- Why doesn't the shell inside the container respond to the arrow keys or other shell niceties? (This usually happens when using the Docker UI.) It's because it's executing /bin/sh; you can execute /bin/bash as soon as the terminal is open.
- add scripts to open the console quickly
- add scripts to run rubocop/tests
- fix/add the Nginx in front of the services that have one in production (the one in Medicaid gateway is not fully configured yet)
