
Sourcery refactored master branch#1

Open
sourcery-ai[bot] wants to merge 1 commit into master from sourcery/master

Conversation

@sourcery-ai sourcery-ai bot commented Dec 4, 2023

Branch master refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin:

Review changes via command line

To manually merge these changes, make sure you're on the master branch, then run:

git fetch origin sourcery/master
git merge --ff-only FETCH_HEAD
git reset HEAD^

Help us improve this pull request!

@sourcery-ai sourcery-ai bot requested a review from tatstratasys December 4, 2023 15:25
  def __init__(self, filename):
      self.filename = filename
-     paths = glob.glob(os.path.join(SCRIPT_DIR, "images", self.filename + ".*"))
+     paths = glob.glob(os.path.join(SCRIPT_DIR, "images", f"{self.filename}.*"))
Function Image.__init__ refactored with the following changes:
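The refactor swaps string concatenation for an f-string; both spell the same glob pattern. A minimal sketch of the equivalence (the directory and filename below are made up for illustration):

```python
import os

SCRIPT_DIR = "/tmp/project"  # hypothetical stand-in for the script's directory
filename = "texture"         # hypothetical image name

# Old style: concatenation inside the glob pattern
old_pattern = os.path.join(SCRIPT_DIR, "images", filename + ".*")
# New style: f-string interpolation; identical resulting pattern
new_pattern = os.path.join(SCRIPT_DIR, "images", f"{filename}.*")

assert old_pattern == new_pattern
```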

Comment on lines -98 to +142
  def gaussian_cdf_approx(x, radius):
      return 0.5 * (1 + tf.tanh(1.12 * x / (math.sqrt(2.) * radius)))

  def gaussian_cdf(x, radius):
      return 0.5 * (1 + tf.erf(x / (math.sqrt(2.) * radius)))

  dims = inputs.shape[-1]
  with tf.name_scope(name):
      # When there are no input dims, there is nothing to encode.
      # This special case is needed because tf.reshape does strange
      # things when 0-dims are involved.
      if dims == 0:
          return inputs
      results = []
      boundaries = tf.linspace(0., 1., self.n_bins + 1)
      boundaries = tf.reshape(boundaries, [1 for _ in inputs.shape] + [-1])

      for level in range(self.n_levels):
          with tf.name_scope(f"level{level}"):
              scale = self.n_bins**level

              # We use the absolute value here just in case the inputs are erroneously negative.
              # Even a negative epsilon would totally wreck the following code.
-             if level == 0:
-                 scaled = tf.abs(inputs)
-             else:
-                 scaled = tf.abs(inputs * scale) % 1
+             scaled = tf.abs(inputs) if level == 0 else tf.abs(inputs * scale) % 1

              diffs = boundaries - scaled[..., tf.newaxis]
              cdfs = gaussian_cdf_approx(diffs, self.radius)
              result = cdfs[...,1:] - cdfs[...,:-1]

              # print_op = tf.print("result: ", result)

              # In the outermost level we don't want to carry over...
              # otherwise we introduce ambiguities.
              if level != 0 or wraparound:
                  cdfs_right = gaussian_cdf_approx(diffs + 1., self.radius)
                  cdfs_left = gaussian_cdf_approx(diffs - 1., self.radius)
                  result = result + cdfs_right[...,1:] - cdfs_right[...,:-1] + cdfs_left[...,1:] - cdfs_left[...,:-1]

              # with tf.control_dependencies([print_op]):
              result = result / scale

              results.append(result)

      result = tf.concat(results, axis=-1)
      result = tf.reshape(result, [-1, self.n_bins * self.n_levels * dims])
      return result
Function OneBlob.__call__ refactored with the following changes:

Comment on lines -155 to +151
- args = parser.parse_args()
- return args
+ return parser.parse_args()
Function get_args refactored with the following changes:

Comment on lines -218 to +213
- if gradients and not all(grad is None for grad in gradients):
+ if gradients and any(grad is not None for grad in gradients):
Function get_train_op refactored with the following changes:
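`not all(grad is None ...)` and `any(grad is not None ...)` are equivalent by De Morgan's laws, so this refactor is behavior-preserving. A quick sanity check over small, made-up gradient lists:

```python
def has_usable_gradient_old(gradients):
    # Original spelling: "not every gradient is None"
    return bool(gradients) and not all(g is None for g in gradients)

def has_usable_gradient_new(gradients):
    # Refactored spelling: "some gradient is not None"
    return bool(gradients) and any(g is not None for g in gradients)

# The two predicates agree on every case, including the empty list.
cases = [[], [None], [None, None], [0.5], [None, 0.5], [0.5, None, 1.0]]
for grads in cases:
    assert has_usable_gradient_old(grads) == has_usable_gradient_new(grads)
```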

Comment on lines -237 to +234
- output_tensor = linear_layer(current_tensor, target_fun.n_channels, tf.float16, f"fc_out", False)
+ output_tensor = linear_layer(
+     current_tensor, target_fun.n_channels, tf.float16, "fc_out", False
+ )
Function make_graph refactored with the following changes:

Comment on lines -235 to +240
-     ninja.variable(toolchain + 'builddir', builddir)
+     ninja.variable(f'{toolchain}builddir', builddir)
  else:
      builddir = ''

- ninja.variable(toolchain + 'defines', config.defines[toolchain] or [])
- ninja.variable(toolchain + 'includes', config.includes[toolchain] or [])
- ninja.variable(toolchain + 'cflags', config.cflags[toolchain] or [])
- ninja.variable(toolchain + 'cxxflags', config.cxxflags[toolchain] or [])
- ninja.variable(toolchain + 'ldflags', config.ldflags[toolchain] or [])
+ ninja.variable(f'{toolchain}defines', config.defines[toolchain] or [])
+ ninja.variable(f'{toolchain}includes', config.includes[toolchain] or [])
+ ninja.variable(f'{toolchain}cflags', config.cflags[toolchain] or [])
+ ninja.variable(f'{toolchain}cxxflags', config.cxxflags[toolchain] or [])
+ ninja.variable(f'{toolchain}ldflags', config.ldflags[toolchain] or [])
Function gen refactored with the following changes:

Comment on lines -303 to +306
- f = open('build.ninja', 'w')
- ninja = Writer(f)
+ with open('build.ninja', 'w') as f:
+     ninja = Writer(f)

- if hasattr(config, "register_toolchain"):
-     config.register_toolchain(ninja)
+     if hasattr(config, "register_toolchain"):
+         config.register_toolchain(ninja)

- gen(ninja, config.toolchain, config)
- f.close()
+     gen(ninja, config.toolchain, config)
Function main refactored with the following changes:
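The `with open(...)` form guarantees the file is closed even if an exception is raised mid-write, which the manual `open()`/`f.close()` pairing does not. A minimal sketch (the file path is a throwaway placeholder, not the project's real `build.ninja`):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "build.ninja.demo")  # placeholder path

# Manual pairing: if an exception occurs between open() and close(),
# f.close() is never reached and the handle leaks until GC.
f = open(path, "w")
f.write("rule cc\n")
f.close()

# Context manager: __exit__ closes the file on success *and* on error.
with open(path, "w") as f:
    f.write("rule cc\n")
assert f.closed  # closed as soon as the with-block exits

os.remove(path)
```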

@@ -41,6 +41,7 @@
same line, but it is far from perfect (in either direction).
"""

Lines 423-502 refactored with the following changes:

Comment on lines -517 to +535
- matched = Search(r'\bNOLINT(NEXTLINE)?\b(\([^)]+\))?', raw_line)
- if matched:
-   if matched.group(1):
-     suppressed_line = linenum + 1
-   else:
-     suppressed_line = linenum
-   category = matched.group(2)
-   if category in (None, '(*)'):  # => "suppress all"
-     _error_suppressions.setdefault(None, set()).add(suppressed_line)
-   else:
-     if category.startswith('(') and category.endswith(')'):
-       category = category[1:-1]
-       if category in _ERROR_CATEGORIES:
-         _error_suppressions.setdefault(category, set()).add(suppressed_line)
-       elif category not in _LEGACY_ERROR_CATEGORIES:
-         error(filename, linenum, 'readability/nolint', 5,
-               'Unknown NOLINT error category: %s' % category)
+ if not (matched := Search(r'\bNOLINT(NEXTLINE)?\b(\([^)]+\))?', raw_line)):
+   return
+ suppressed_line = linenum + 1 if matched.group(1) else linenum
+ category = matched.group(2)
+ if category in (None, '(*)'):  # => "suppress all"
+   _error_suppressions.setdefault(None, set()).add(suppressed_line)
+ elif category.startswith('(') and category.endswith(')'):
+   category = category[1:-1]
+   if category in _ERROR_CATEGORIES:
+     _error_suppressions.setdefault(category, set()).add(suppressed_line)
+   elif category not in _LEGACY_ERROR_CATEGORIES:
+     error(
+       filename,
+       linenum,
+       'readability/nolint',
+       5,
+       f'Unknown NOLINT error category: {category}',
+     )
Function ParseNolintSuppressions refactored with the following changes:
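The refactor relies on the walrus operator (`:=`, Python 3.8+, per the version-compatibility caveat in the review summary) to bind the match and test it in one expression, enabling an early return. A standalone sketch of the pattern, using `re.search` in place of cpplint's `Search` helper and a simplified return value:

```python
import re

def nolint_suppression(raw_line, linenum):
    """Return the line number a NOLINT comment suppresses, or None (simplified sketch)."""
    # Bind and test in one expression; bail out early when there is no match.
    if not (matched := re.search(r'\bNOLINT(NEXTLINE)?\b', raw_line)):
        return None
    # NOLINTNEXTLINE suppresses the *following* line.
    return linenum + 1 if matched.group(1) else linenum

assert nolint_suppression("int x; // NOLINT", 10) == 10
assert nolint_suppression("// NOLINTNEXTLINE", 10) == 11
assert nolint_suppression("int x;", 10) is None
```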

Comment on lines -698 to +701
- if (self._last_header > header_path and
-     Match(r'^\s*#\s*include\b', clean_lines.elided[linenum - 1])):
-     return False
- return True
+ return self._last_header <= header_path or not Match(
+     r'^\s*#\s*include\b', clean_lines.elided[linenum - 1])
Function _IncludeState.IsInAlphabeticalOrder refactored with the following changes:
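The rewrite applies De Morgan's law, `not (A and B) == (not A) or (not B)`, together with `not (x > y) == (x <= y)` for comparable strings. A small exhaustive check with made-up header paths (booleans stand in for the `Match` result):

```python
def in_order_old(last_header, header_path, prev_is_include):
    # Original form: explicit early False, then True
    if last_header > header_path and prev_is_include:
        return False
    return True

def in_order_new(last_header, header_path, prev_is_include):
    # De Morgan: not (A and B)  ==  (not A) or (not B)
    return last_header <= header_path or not prev_is_include

# Exhaustive over a tiny domain: both forms always agree.
for last in ("a.h", "b.h", "c.h"):
    for current in ("a.h", "b.h", "c.h"):
        for prev_is_include in (True, False):
            assert in_order_old(last, current, prev_is_include) == \
                   in_order_new(last, current, prev_is_include)
```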

@sourcery-ai sourcery-ai bot left a comment


PR Type: Refactoring

Summary of PR: This PR introduces a series of refactoring changes made by the Sourcery tool, which include the use of f-strings for string formatting, simplification of conditional statements, and other Pythonic improvements for better readability and performance.

General PR suggestions

  • Ensure that the refactoring maintains the original intent and functionality of the code, especially where behavior may be altered, such as the regex pattern application in CleanseRawStrings.
  • Verify that the changes in indentation and formatting align with the project's style guidelines to maintain consistency throughout the codebase.
  • Review the use of f-strings and other modern Python features for compatibility with the project's Python version requirements.
  • Consider the readability of the code after refactoring, particularly when negations are introduced in conditional statements, and weigh this against the benefits of conciseness.
  • Confirm that the simplifications made by the tool do not introduce logical errors, especially in cases where the semantics of the code may change, such as with the use of any versus not all.

Your trial expires on December 18, 2023. Please email tim@sourcery.ai to continue using Sourcery ✨

  # The allowed extensions for file names
  # This is set by --extensions flag.
- _valid_extensions = set(['cc', 'h', 'cpp', 'cu', 'cuh'])
+ _valid_extensions = {'cc', 'h', 'cpp', 'cu', 'cuh'}
suggestion (llm): While using a set literal is more performant than calling the set constructor, consider if this change is necessary if the original codebase prefers a consistent style of set initialization.

Suggested change
- _valid_extensions = {'cc', 'h', 'cpp', 'cu', 'cuh'}
+ _valid_extensions = set(['cc', 'h', 'cpp', 'cu', 'cuh'])
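Both spellings produce an identical `set` object; the literal merely skips building and discarding an intermediate list. A quick check:

```python
# set() call over a list vs. a set literal: same value, same type
from_constructor = set(['cc', 'h', 'cpp', 'cu', 'cuh'])
from_literal = {'cc', 'h', 'cpp', 'cu', 'cuh'}

assert from_constructor == from_literal
assert type(from_literal) is set
assert 'cpp' in from_literal and 'py' not in from_literal
```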

-     Match(r'^\s*#\s*include\b', clean_lines.elided[linenum - 1])):
-     return False
- return True
+ return self._last_header <= header_path or not Match(
suggestion (llm): The refactored condition is more concise but slightly less readable due to the negation of the match. Consider if the readability trade-off is worth the conciseness in this context.

Suggested change
- return self._last_header <= header_path or not Match(
+ return not (self._last_header > header_path and Match(r'^\s*#\s*include\b', clean_lines.elided[linenum - 1]))

  def NoExtension(self):
      """File has no source file extension."""
-     return '/'.join(self.Split()[0:2])
+     return '/'.join(self.Split()[:2])
suggestion (llm): The change to use slicing with [:2] instead of [0:2] is a nice touch for brevity, but ensure it aligns with the code style guidelines of the project.
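`seq[0:2]` and `seq[:2]` denote the same slice; omitting the start index defaults it to 0. With a made-up `Split()`-style result:

```python
parts = ['chrome', 'browser', 'ui', 'browser.h']  # hypothetical path components

# Omitting the start index is equivalent to starting at 0.
assert parts[0:2] == parts[:2] == ['chrome', 'browser']
assert '/'.join(parts[:2]) == 'chrome/browser'
```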

  # of the line since this is probably a multiline string.
  collapsed += elided
  break
+ elif Search(r'\b(?:0[bBxX]?|[1-9])[0-9a-fA-F]*$', head):
issue (llm): The refactoring within the CleanseRawStrings function seems to have introduced a bug. The regex pattern is now applied to head instead of tail, which changes the behavior of the function. This needs to be corrected to maintain the original logic.

Suggested change
- elif Search(r'\b(?:0[bBxX]?|[1-9])[0-9a-fA-F]*$', head):
+ elif Search(r'\b(?:0[bBxX]?|[1-9])[0-9a-fA-F]*$', tail):

  break

- self.output.write(leading_space + text[0:space] + ' $\n')
+ self.output.write(leading_space + text[:space] + ' $\n')
nitpick (llm): The change to use slicing with [:space] instead of [0:space] is a nice touch for brevity, but ensure it aligns with the code style guidelines of the project.

  # 50 => 0, 100 => 1, 200 => 2, 400 => 3, 800 => 4, 1600 => 5, ...
- if error_level > 5:
-     error_level = 5
+ error_level = min(error_level, 5)
praise (llm): Good use of the min function to simplify the logic for capping the error level.
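A quick check that `min(error_level, 5)` clamps exactly like the original two-line guard:

```python
def cap_old(error_level):
    # Original: mutate only when above the ceiling
    if error_level > 5:
        error_level = 5
    return error_level

def cap_new(error_level):
    # Refactored: single-expression clamp
    return min(error_level, 5)

for level in range(10):
    assert cap_old(level) == cap_new(level)
```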
