
Conversation

@mario-koddenbrock

The check "if preds is None" was not working, since preds was actually a tuple of four Nones: (None, None, None, None).
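
A minimal sketch of why the identity check misses this case (the values are the ones reported above; the snippet is illustrative, not part of the codebase):

preds = (None, None, None, None)        # what model.predict hands back when nothing is found
print(preds is None)                    # False: a tuple of Nones is not the None object itself
print(all(p is None for p in preds))    # True: the check this PR switches to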

@MassEast commented Oct 9, 2025

Running into the same issue, which would be fixed by this pull request.

Traceback (most recent call last):
  File "/app/scripts/benchmarking/run_benchmark.py", line 544, in <module>
    run_benchmark(
  File "/app/scripts/benchmarking/run_benchmark.py", line 382, in run_benchmark
    _generate_predictions(loaded_models, dataset_real, "real", results_dir)
  File "/app/scripts/benchmarking/run_benchmark.py", line 114, in _generate_predictions
    pred_mask = model.predict(processed_image)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/mt/benchmark/models/cellsam.py", line 75, in predict
    mask, _, _ = segment_cellular_image(
                 ^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/synmt/lib/python3.11/site-packages/cellSAM/model.py", line 177, in segment_cellular_image
    mask = fill_holes_and_remove_small_masks(segmentation_predictions, min_size=25)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/synmt/lib/python3.11/site-packages/cellSAM/utils.py", line 299, in fill_holes_and_remove_small_masks
    if masks.ndim > 3 or masks.ndim < 2:
       ^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'ndim'

@MassEast

Is there any hope for this simple but really necessary merge? It's such an obvious bug and fix, and merging it would help me get cellSAM running stably... Thanks!

@mario-koddenbrock (Author)

You could use my forked branch via:

pip install git+https://github.com/mario-koddenbrock/cellSAM.git

@anwai98 commented Nov 26, 2025

Hi @rossbar,

Any chance you could review this and take care of it?

It seems that the check (if preds is None) isn't suited to the return type of model.predict, which appears to be a tuple of four values.

Could you please merge the PR here? :) (seems like a critical bug to me when the model does not find any objects)

@rossbar (Contributor) commented Dec 16, 2025

Yes, this is obviously a bug. I have some trepidation about changing it without a test ensuring it doesn't negatively affect the tiled workflow (cellsam_pipeline with use_wsi=True), where the tiled workers currently handle the exception correctly.

Can you share a minimal reproducing example of how you hit this code branch so we can use it to design this test?

@anwai98 commented Dec 17, 2025

Hi @rossbar,

Thanks for your response. And first of all, congratulations on the publication! ;)

And I totally understand the caution! (especially at this time of year, haha)

Two points from my side:

  1. The change is super minimal. Here are the changes in my local clone that make it work ;)
[nim00007] nimanwai@glogin10 cellSAM $ git remote -v
origin  https://github.com/vanvalenlab/cellSAM (fetch)
origin  https://github.com/vanvalenlab/cellSAM (push)
[nim00007] nimanwai@glogin10 cellSAM $ git diff
diff --git a/cellSAM/model.py b/cellSAM/model.py
index b124581..5633335 100644
--- a/cellSAM/model.py
+++ b/cellSAM/model.py
@@ -163,7 +163,7 @@ def segment_cellular_image(
         model, img = model.to(device), img.to(device)
 
     preds = model.predict(img, boxes_per_heatmap=bounding_boxes)
-    if preds is None:
+    if all(p is None for p in preds):
         warn("No cells detected in the image.")
         return np.zeros(img.shape[1:], dtype=np.int32), None, None

The check is super simple: model.predict always returns a tuple, so comparing that tuple against None with an identity check is what creates the fuss. This update (and I think it aligns with the PR too) does the trick!
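
For context, with the fixed check the no-detection path simply returns an all-zeros mask and two Nones (see the diff above), so a caller can detect that directly. A minimal sketch, assuming segment_cellular_image is called with default arguments on the test image from the link below:

import imageio.v3 as imageio
from cellSAM.model import segment_cellular_image

img = imageio.imread("test.tif")          # an image where no cells are detected
mask, _, _ = segment_cellular_image(img)
if mask.max() == 0:
    # patched early return: an all-zeros int32 label mask means "no cells detected"
    print("No cells detected in this image.")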

  2. For simple reproducibility, let me send you an image (https://owncloud.gwdg.de/index.php/s/ntxhmBieoeGgnQM/download) where it breaks (example script below):
import imageio.v3 as imageio
from cellSAM import cellsam_pipeline

image = imageio.imread("test.tif")
segmentation = cellsam_pipeline(image, use_wsi=False)

PS. I came across another issue just now: an array of all ones or all zeros breaks too (I know it's not the intended kind of input, but it helps with debugging if these two cases don't break and instead just return no segmentation)

e.g.

import numpy as np
from cellSAM import cellsam_pipeline

image = np.zeros((512, 512))
segmentation = cellsam_pipeline(image, use_wsi=False)
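
Until that's handled in the library, a caller-side guard works around it; a sketch (the wrapper name and the skip-blank behaviour are my assumption, not part of cellSAM):

import numpy as np
from cellSAM import cellsam_pipeline

def cellsam_or_empty(image, **kwargs):
    # constant images (all zeros / all ones) currently crash the pipeline,
    # so return an empty label mask for them instead of calling the model
    if np.all(image == image.flat[0]):
        return np.zeros(image.shape[:2], dtype=np.int32)
    return cellsam_pipeline(image, **kwargs)

segmentation = cellsam_or_empty(np.zeros((512, 512)), use_wsi=False)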

Let me know how it goes! :)
