rciorba left a comment
Thanks for your PR! Glacier storage seems quite useful.
Adding some tests for this new feature would be nice ;)
z3/snap.py
Outdated
lifecycle_rule_name = "z3 transition"
if args.s3_prefix:
    lifecycle_rule_name += " " + args.s3_prefix.replace("/", " ")
while lifecycle_rule_name.endswith(" "):
You want to use string.rstrip(" ")
Another holdover; I was originally changing "/" to "-" :). Changed.
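The rstrip version suggested above can be sketched like this (the function name is hypothetical, mirroring the quoted snippet):

```python
def build_rule_name(s3_prefix=None):
    # Build the lifecycle rule name; rstrip(" ") trims all trailing
    # spaces in one call, replacing the while loop.
    name = "z3 transition"
    if s3_prefix:
        name += " " + s3_prefix.replace("/", " ")
    return name.rstrip(" ")
```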
z3/snap.py
Outdated
except boto.exception.S3ResponseError as e:
    if e.error_code == 'NoSuchBucket':
        # Let's try creating it
        bucket = s3.create_bucket(bucket_name)
    else:
        raise

try:
Can you please extract the whole bucket life-cycle handling section to a separate function?
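One possible shape for the extracted helper (the function name and the dict-based rule are illustrative stand-ins for boto's Lifecycle objects, so this sketch stays self-contained):

```python
def configure_bucket_lifecycle(bucket, rule_name, prefix, use_glacier, days=30):
    # Hypothetical extracted function: all bucket life-cycle handling
    # in one place. `bucket` is expected to expose configure_lifecycle /
    # delete_lifecycle_configuration, like boto's Bucket.
    if use_glacier:
        rule = {"id": rule_name, "prefix": prefix, "status": "Enabled",
                "transition": {"days": days, "storage_class": "GLACIER"}}
        bucket.configure_lifecycle(rule)
    else:
        # Drop the rule when glacier is no longer configured.
        bucket.delete_lifecycle_configuration()
```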
z3/snap.py
Outdated
except AttributeError:
    # This seems to be if the FakeKey object doesn't have the ongoing_restore attribute
    pass
except:
There's no need to do a blanket except and re-raise, that's Python's default behaviour.
My original code had something else there; I changed it to a pass simply as a placeholder. Fixed.
z3/snap.py
Outdated
    current_snap.key.restore(days=5)
    raise Exception('snapshot {} is currently in glacier storage, requesting transfer now'.format(snap_name))
except AttributeError:
    # This seems to be if the FakeKey object doesn't have the ongoing_restore attribute
Ignoring the AttributeError to get the tests to pass is not very nice. Maybe now it's just because of FakeKey.ongoing_restore, but in the future it could mask a legitimate bug.
A better course of action would be to change FakeKey to this:
class FakeKey(object):
    def __init__(self, name, metadata=None, storage_class="STANDARD_IA"):
        self.name = name
        self.key = name
        self.metadata = metadata
        self.size = 1234
        self.ongoing_restore = None
        self.storage_class = storage_class
Then drop the exception handling altogether.
Okay, making that change in my branch.
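With ongoing_restore always defined on FakeKey, the production check can stay straight-line. A minimal sketch (glacier_state is a hypothetical helper, not code from the PR):

```python
class FakeKey(object):
    # Test double with ongoing_restore always defined, as suggested
    # above, so no AttributeError handling is needed.
    def __init__(self, name, storage_class="STANDARD_IA"):
        self.name = name
        self.ongoing_restore = None
        self.storage_class = storage_class

def glacier_state(key):
    # The attribute is guaranteed to exist, so plain checks suffice.
    if key.ongoing_restore:
        return "restoring"
    if key.storage_class == "GLACIER":
        return "needs_restore"
    return "available"
```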
    transition=transition)
print("lifecycle = {}".format(lifecycle.to_xml()), file=sys.stderr)
if lifecycle is not None:
    bucket.configure_lifecycle(lifecycle)
Shouldn't the transition to GLACIER happen only on upload?
I'm especially concerned by the fact this alters the storage class even when run with --dry-run. A dry run should have no side effects.
One of the reasons I did that was that I have it set up to remove the lifecycle rule if use_glacier is no longer in the configuration.
Changed it.
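Gating the side effect on dry_run could look like the following sketch (apply_lifecycle is a hypothetical helper name):

```python
def apply_lifecycle(bucket, lifecycle, dry_run=False):
    # A dry run must leave the bucket untouched; only report
    # whether anything would have been changed.
    if dry_run:
        return False
    bucket.configure_lifecycle(lifecycle)
    return True
```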
if current_snap.key.ongoing_restore == True:
    raise Exception('snapshot {} is currently being restored from glacier; try again later'.format(snap_name))
if current_snap.key.storage_class == "GLACIER":
    current_snap.key.restore(days=5)
I might be getting it wrong but: it seems this only handles glacier-restore for the most recent snapshot. Shouldn't all 'to-be-downloaded' snapshots be glacier-restored in one run?
Would it not make sense to move the glacier-restore logic in to the loop that also figures out which snapshots need downloading, so it can ask for them all to be restored?
I haven't done a restore at all, so it wasn't clear to me what I needed to do. That would be the loop that adds to to_restore?
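Moving the restore logic into that loop might look like this sketch (request_restores and the snap/key shapes are hypothetical, modeled on the diff above):

```python
def request_restores(to_restore, days=5):
    # Kick off a glacier restore for every snapshot that needs
    # downloading, not just the most recent one.
    pending = []
    for snap in to_restore:
        if snap.key.ongoing_restore:
            pending.append(snap.name)      # restore already in flight
        elif snap.key.storage_class == "GLACIER":
            snap.key.restore(days=days)    # start the restore
            pending.append(snap.name)
    return pending
```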
z3/snap.py
Outdated
try:
    if current_snap.key.ongoing_restore == True:
        raise Exception('snapshot {} is currently being restored from glacier; try again later'.format(snap_name))
    if current_snap.key.storage_class == "GLACIER":
Hmm... their documentation states the storage class should remain GLACIER. Is that not the case? http://docs.aws.amazon.com/AmazonS3/latest/dev/restoring-objects.html
Experimenting now. It'll take a few hours for it to happen. I was going by the boto documentation, which seemed to imply it.
Confirmed via experimentation. The expiration count is pretty arbitrary, but I am not sure of the best way to make that configurable.
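Making the expiration count configurable could be as simple as a config entry with the current value as the default (the key name GLACIER_RESTORE_DAYS is hypothetical, not an existing z3 setting):

```python
def restore_days(config, default=5):
    # Read the restore expiration from config, falling back to the
    # currently hard-coded 5 days.
    return int(config.get("GLACIER_RESTORE_DAYS", default))
```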
* Add the glacier-related attributes to FakeKey.
* Because of that, don't catch AttributeError. Yay
* Don't change the key's storage class in restore if dry_run.
* Move the lifecycle management into a function.
* Related to that, only call it if dry_run is false.
* Somewhat-related to that, only create the bucket if the subcommand is backup. This still creates the bucket even if dry run is given.
…"glacier" after an object is restored. If ongoing_restore is None, then the object has not been restored.
Any idea why this died?

Project abandonment.

Project abandonment, I presume. Alas.

Any chance you can do it to this forked repo?
Adds an option and config file entry for glacier support; also tries to create the specified bucket if it doesn't exist. I haven't fully tested the restore part.