Would be a bit of work but would resolve a few current issues we didn't see when we implemented these.
- ICA Storage Credentials scoped to a bucket-level prefix (byob-icav2) create issues when sharing those credentials across the tenant. Instead we could have 'service'-specific credentials for specific accounts. We also created credentials for a given prefix rather than for a given purpose; credentials per purpose would mean a reset only rotates the credentials for that one purpose.
- Use IAM roles when creating our storage credentials (see https://help.ica.illumina.com/home/h-storage/s-awss3/iam-role-method), a newer method that also allows copying objects that carry S3 tags.
- This could also come with a split of 'clinical' and 'research' projects. Most of our microservice systems are now sorted and generalised, so it's easy enough for us to run analyses over multiple projects. Research analyses could then run in the project of the research group / data custodian, rather than everything being dumped into 'prod' and the primary FASTQs sent out from there.
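To make the 'credentials per purpose' idea concrete, a minimal sketch of what a purpose-scoped IAM policy could look like. The bucket name, service name, and prefix layout here are hypothetical placeholders, not our actual setup:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FastqCopyServiceObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::byob-icav2-example/fastq-copy-service/*"
    },
    {
      "Sid": "FastqCopyServiceListScopedPrefix",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::byob-icav2-example",
      "Condition": { "StringLike": { "s3:prefix": ["fastq-copy-service/*"] } }
    }
  ]
}
```

Each service/purpose would get its own policy like this, so resetting one set of credentials wouldn't affect any other consumer of the bucket.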
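For the IAM role method, the linked ICA doc has you create a role that ICA assumes rather than handing over long-lived keys. A rough sketch of the trust policy shape, assuming the usual cross-account `sts:AssumeRole` + external-ID pattern; the actual principal account and external ID would come from the ICA console, the placeholders below are not real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<ICA_ACCOUNT_ID>:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<EXTERNAL_ID_FROM_ICA>" }
      }
    }
  ]
}
```

This is also where the tag-copying benefit comes in: the role's permissions policy can include `s3:GetObjectTagging` / `s3:PutObjectTagging`, so copies can preserve S3 object tags.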