Releases: alephdata/servicelayer
v1.25.2
v1.25.0
What's Changed
- Enable Workload Identity authentication. By setting `GOOGLE_SERVICE_ACCOUNT_EMAIL` to a valid service account email, the Google Cloud Storage client can use Workload Identity when signing URLs. In this case the service account token is ignored and no longer needs to be mounted. #247
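A minimal sketch of how this signing mode can work with the `google-cloud-storage` API: passing `service_account_email` and `access_token` to `generate_signed_url` delegates the signature to the IAM signBlob API instead of a mounted key file. The bucket and blob names below are placeholders, and the helper names are illustrative, not servicelayer's actual code.

```python
import datetime
import os


def can_use_workload_identity(env):
    # The behaviour described in this release: a valid service account
    # email in the environment switches signing away from the mounted
    # service account token file.
    return bool(env.get("GOOGLE_SERVICE_ACCOUNT_EMAIL"))


def signed_download_url(bucket_name, blob_name):
    """Sign a V4 download URL without a mounted private key file."""
    # Imported lazily so the pure helper above works without the dependency.
    import google.auth
    import google.auth.transport.requests
    from google.cloud import storage

    credentials, _ = google.auth.default()
    credentials.refresh(google.auth.transport.requests.Request())
    client = storage.Client(credentials=credentials)
    blob = client.bucket(bucket_name).blob(blob_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=15),
        service_account_email=os.environ["GOOGLE_SERVICE_ACCOUNT_EMAIL"],
        access_token=credentials.token,
    )
```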
Full Changelog: v1.24.1...v1.25.0
v1.24.1
What's Changed
- Add an optional reason string when using `backoff` in a retry loop by @stchris in 9d6218b
- Fixed some potentially unbound variables in the error reporting logic by @stchris in 0a43012
- Reduce the number of retries when waiting for a task to be picked up and show up in Redis by @stchris in e41bfa3
- Allow newer google-cloud-storage versions, which have improved default retry logic built in by @stchris in 8e73aea
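A retry helper with an optional reason string could look roughly like this. The names and signature are illustrative, not servicelayer's actual `backoff` helper:

```python
import logging
import time

log = logging.getLogger(__name__)


def backoff(failures, reason=None, base=0.5, cap=30.0):
    """Sleep with exponential backoff, logging an optional reason.

    Illustrative sketch: delay doubles with each failure, capped at `cap`.
    """
    delay = min(cap, base * (2 ** failures))
    if reason is not None:
        log.info("Backing off %.1fs (attempt %d): %s", delay, failures + 1, reason)
    time.sleep(delay)
    return delay
```

Attaching a reason makes the otherwise silent sleep visible in logs, which helps when diagnosing why a worker appears idle.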
Full Changelog: v1.23.0...v1.24.1
v1.24.0
v1.23.3
What's Changed
- Make servicelayer workers dump their stacktrace on receiving SIGUSR1 by @stchris in 10881f1
- Use the reentrant safe SimpleQueue instead of Queue for keeping track of RabbitMQ messages by @stchris in 763de10
- Make `redis` an explicit requirement by @stchris in f67797a
- Disable Sentry's ThreadingIntegration by @stchris in ddbb698
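Dumping thread stacktraces on SIGUSR1 can be done with the standard library alone; this is a sketch of the technique, not servicelayer's exact implementation:

```python
import signal
import sys
import threading
import traceback


def dump_stacktraces():
    """Return a formatted dump of every thread's current stack."""
    parts = []
    frames = sys._current_frames()
    for thread in threading.enumerate():
        parts.append(f"--- Thread: {thread.name} ---\n")
        frame = frames.get(thread.ident)
        if frame is not None:
            parts.extend(traceback.format_stack(frame))
    return "".join(parts)


def install_handler():
    # faulthandler.register(signal.SIGUSR1, all_threads=True) is a
    # lower-level alternative that writes directly to stderr.
    signal.signal(signal.SIGUSR1, lambda signum, frame: print(dump_stacktraces()))
```

With the handler installed, `kill -USR1 <pid>` against a running worker prints where every thread currently is, without stopping the process.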
Full Changelog: v1.23.0...v1.23.3
v1.23.2
v1.23.1
What's Changed
Bugfixes
- Proper clean-up of tasks which have exhausted the maximum number of retries by @catileptic and @stchris in #210
Full Changelog: v1.23.0...v1.23.1
v1.23.0
The custom messaging queue used by Aleph has been replaced with RabbitMQ. As of this version of servicelayer, Aleph will use a persistent messaging queue. We have seen an increase in stability, predictability and also in the clarity of debugging since making these changes.
The implementation uses the default, direct exchange. RabbitMQ allows users to monitor the activity of the messaging queues using a management interface, accessible from the browser if the proper port is exposed.
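Using the default direct exchange means publishing with an empty exchange name, where the routing key names the target queue directly. A sketch with the `pika` client (host and queue name are placeholders):

```python
def management_url(host, port=15672):
    # RabbitMQ's management interface listens on port 15672 by default.
    return f"http://{host}:{port}/"


def publish_task(queue_name, body):
    """Publish to RabbitMQ's default direct exchange.

    With exchange="", the broker routes the message to the queue whose
    name equals the routing key.
    """
    import pika  # imported lazily; management_url above has no dependency

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=queue_name, durable=True)
    channel.basic_publish(exchange="", routing_key=queue_name, body=body)
    connection.close()
```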
In order to populate the System Status view in Aleph, Redis is used to independently track the state of tasks. Instead of tracking jobs (job_ids), Aleph now tracks individual tasks (task_ids). The structure of Redis keys has also changed as follows:
Redis keys used by the Dataset object:
- `tq:qdatasets`: set of all `collection_id`s of active datasets (a dataset is considered active when it has either running or pending tasks)
- `tq:qdj:<dataset>:taskretry:<task_id>`: the number of times `task_id` was retried
All of the following keys refer to `task_id`s or statistics about tasks for a certain dataset (`collection_id`):
- `tq:qdj:<dataset>:finished`: number of tasks that have been marked as "Done" and for which an acknowledgement is also sent by the Worker over RabbitMQ.
- `tq:qdj:<dataset>:running`: set of all `task_id`s of tasks currently running. A "Running" task is a task which has been checked out and is being processed by a worker.
- `tq:qdj:<dataset>:pending`: set of all `task_id`s of tasks currently pending. A "Pending" task has been added to a RabbitMQ queue (via a `basic_publish` call) by a producer (an API call, a UI action etc.).
- `tq:qdj:<dataset>:start`: the UTC timestamp when either the first `task_id` was added to a RabbitMQ queue (so, we have our first Pending task) or when the first `task_id` was checked out (so, we have our first Running task). The `start` key is updated when the first task is handed to a Worker.
- `tq:qdj:<dataset>:last_update`: the UTC timestamp of the latest change to the state of tasks running for a certain `collection_id`. This is set when a new task is Pending, Running, Done, or canceled.
- `tq:qds:<dataset>:<stage>`: a set of all `task_id`s that are either running or pending, for a certain stage.
- `tq:qds:<dataset>:<stage>:finished`: number of tasks that have been marked as "Done" for a certain stage.
- `tq:qds:<dataset>:<stage>:running`: set of all `task_id`s of tasks currently running for a certain stage.
- `tq:qds:<dataset>:<stage>:pending`: set of all `task_id`s of tasks currently pending for a certain stage.
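A sketch of how a status view could read these keys with the `redis` client. The dataset name and connection details are placeholders, and the helper names are illustrative:

```python
def dataset_keys(dataset):
    """Build the per-dataset key names described above."""
    prefix = f"tq:qdj:{dataset}"
    return {
        "finished": f"{prefix}:finished",
        "running": f"{prefix}:running",
        "pending": f"{prefix}:pending",
        "start": f"{prefix}:start",
        "last_update": f"{prefix}:last_update",
    }


def dataset_status(dataset, host="localhost"):
    """Summarize a dataset's task state from Redis."""
    import redis  # lazy import: the key builder above needs no server

    conn = redis.Redis(host=host, decode_responses=True)
    keys = dataset_keys(dataset)
    return {
        "finished": int(conn.get(keys["finished"]) or 0),  # plain counter
        "running": conn.scard(keys["running"]),  # sets of task_ids
        "pending": conn.scard(keys["pending"]),
        "last_update": conn.get(keys["last_update"]),
    }
```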
Tasks are assigned a random priority before being added to the appropriate queues to ensure a fair distribution of execution. The current implementation also allows Aleph admin users to choose to assign a task either a global minimum priority or a global maximum priority.
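Random priorities with admin overrides might be assigned along these lines. The bounds are illustrative, not servicelayer's actual values; RabbitMQ only honours message priorities on queues declared with the `x-max-priority` argument:

```python
import random

# Illustrative bounds, not servicelayer's actual configuration.
PRIO_MIN, PRIO_MAX = 1, 9


def assign_priority(override=None):
    """Pick a random priority, or a forced global min/max for admins."""
    if override == "min":
        return PRIO_MIN
    if override == "max":
        return PRIO_MAX
    return random.randint(PRIO_MIN, PRIO_MAX)


def publish_with_priority(channel, queue, body, override=None):
    import pika  # lazy import; assign_priority is usable without it

    channel.basic_publish(
        exchange="",
        routing_key=queue,
        body=body,
        properties=pika.BasicProperties(priority=assign_priority(override)),
    )
```

Randomizing within the band keeps one dataset's burst of tasks from starving others, while the min/max overrides let an admin push a reindex to the back or front of the line.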
What's Changed
- Adds a last_updated timestamp to the dataset status by @stchris in #136
- Pin moto because of breaking changes in version 5.0 + by @stchris in #155
- Remove unused GitHub Actions workflow by @tillprochaska in #154
- Standardize development dependencies / refactor GHA workflow by @tillprochaska in #153
Dependency upgrades
- Bump black from 23.9.1 to 23.11.0 by @dependabot in #135
- Bump wheel from 0.41.2 to 0.42.0 by @dependabot in #134
- Bump prometheus-client from 0.17.1 to 0.19.0 by @dependabot in #133
- Bump ruff from 0.0.292 to 0.1.8 by @dependabot in #138
- Bump pytest from 7.4.2 to 7.4.3 by @dependabot in #121
- Bump pytest-env from 1.0.1 to 1.1.3 by @dependabot in #132
- Bump pytest-mock from 3.11.1 to 3.12.0 by @dependabot in #126
- Update development dependencies in groups by @stchris in #139
- Bump the dev-dependencies group with 1 update by @dependabot in #140
- Bump fakeredis from 2.19.0 to 2.20.1 by @dependabot in #141
- Release 1.22.2 by @tillprochaska in #167
- Bump the dev-dependencies group with 6 updates by @dependabot in #170
- Bump fakeredis from 2.20.1 to 2.22.0 by @dependabot in #168
- Bump prometheus-client from 0.19.0 to 0.20.0 by @dependabot in #159
- Bump structlog from 23.2.0 to 24.1.0 by @dependabot in #151
- Release/1.23.0 by @stchris in #143
Full Changelog: v1.22.1...v1.23.0
v1.22.2
This release includes a fix for the archive functionality in servicelayer. Previously, the generate_url methods of the Google Cloud Storage archive adapter and the AWS S3 archive adapter were generating URLs instructing AWS S3 and Google Cloud Storage to send a Content-Disposition: inline header in the response.
When sending this header, most browsers will automatically open the file if the file’s MIME type is supported by the browser. This may not be desired in some cases, for example when downloading files from untrustworthy sources.
Starting with this version of servicelayer, the generated URLs will instead instruct AWS S3 and Google Cloud Storage to send a Content-Disposition: attachment header. Browsers won’t open files without user interaction if this header is set.
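For the S3 side, this change corresponds to the `ResponseContentDisposition` parameter of a presigned URL. A sketch with `boto3`, where the bucket, key, and filename are placeholders:

```python
def disposition_params(bucket, key, filename):
    """Presign parameters asking S3 to respond with Content-Disposition: attachment."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ResponseContentDisposition": f'attachment; filename="{filename}"',
    }


def generate_url(bucket, key, filename, expires=3600):
    """Presign a download URL that browsers will save rather than render."""
    import boto3  # lazy import: disposition_params works without it

    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params=disposition_params(bucket, key, filename),
        ExpiresIn=expires,
    )
```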
v1.22.1
What's Changed
- Change default port for Prometheus metrics endpoint to 9100 by @tillprochaska in #129
- Misc Prometheus changes by @tillprochaska in #130
Full Changelog: v1.22.0...v1.22.1