A domain-driven microscopy automation library built on an action-perception loop architecture, integrating with Bluesky and Micro-Manager for intelligent microscopy experiments.
- Action-Perception Loop: natural workflow that matches how microscopists work
  - Atomic actions with state validation
  - Incremental perception updates
  - Adaptive decision making
  - Real-time visualization
- Composable Strategies:
  - Cell tracking
  - Sample mapping
  - Multi-channel acquisition
  - Focus mapping
  - Dynamic adaptation
- Flexible Workflows:
  - Cell tracking experiments
  - Tissue mapping
  - Multi-modal imaging
  - Adaptive protocols
- Clean hardware abstraction layer for microscope devices (see the sketch after this list)
- Bluesky integration for experiment tracking and data management
- Real-time visualization with quality metrics
- Built-in image processing and analysis
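As a rough illustration of what the hardware abstraction layer provides, a device interface might look like the sketch below. The names `Stage`, `move_to`, and `position` are hypothetical, chosen for this example; the actual interface lives in `microscope.py`:

```python
from typing import Protocol, Tuple


class Stage(Protocol):
    """Hypothetical sketch of a stage abstraction; see microscope.py
    for the real hardware interface."""

    def move_to(self, x: float, y: float) -> None:
        """Move the stage to an absolute (x, y) position in microns."""
        ...

    @property
    def position(self) -> Tuple[float, float]:
        """Current (x, y) stage position in microns."""
        ...
```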
Install from source into a virtual environment:

```bash
# Create and activate a virtual environment
python -m pip install virtualenv
python -m virtualenv deepthought
source deepthought/bin/activate  # On Windows: deepthought\Scripts\activate

# Install the package
python -m pip install -U pip
python -m pip install -e .
```

A minimal cell-tracking experiment looks like this:

```python
from datetime import timedelta

from deepthought.microscope import ActionPerceptionMicroscope
from deepthought.microscopy_workflows import CellTrackingExperiment
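
# `mmc` is a Micro-Manager core instance. One common way to create it is
# via pymmcore (an assumption for this example; adapt to your local setup):
import pymmcore

mmc = pymmcore.CMMCore()
mmc.loadSystemConfiguration("MMConfig_demo.cfg")  # your Micro-Manager config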
# Initialize the microscope around the Micro-Manager core
microscope = ActionPerceptionMicroscope(mmc)

# Configure the experiment
experiment = CellTrackingExperiment(
    duration=timedelta(hours=1),
    interval=timedelta(seconds=30),
    channels={
        "DAPI": 30,    # ms exposure
        "FITC": 200,   # ms exposure
        "TxRed": 200,  # ms exposure
    },
    target_cell_type="cell",
    min_cells=10,
)
# Run the experiment (inside an async context; `initial_state` is the
# starting state for the run)
results = await experiment.run(initial_state)
```

Custom strategies subclass ObservationStrategy and choose the next action from the current perception:

```python
from deepthought.microscopy_loop import ObservationStrategy, MicroscopeAction

class CustomStrategy(ObservationStrategy):
    """Example custom observation strategy."""

    def next_action(self, perception):
        # Decide the next action from the current perception.
        # AutoFocusAction, MoveStageTo, and AcquireImageAction are
        # MicroscopeAction subclasses.
        if not perception.has_focus():
            return AutoFocusAction()
        if perception.needs_new_position():
            return MoveStageTo(self.next_position())
        return AcquireImageAction(self.current_channel())

    def is_complete(self, perception):
        return self.goals_achieved(perception)
```
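Custom actions can be sketched in the same spirit. Here is a minimal, hypothetical example assuming actions implement an async `execute` hook; the hook name and signature are assumptions for illustration, not the library's confirmed API:

```python
import asyncio


class WaitAction(MicroscopeAction):
    """Hypothetical action; `execute` and its signature are assumed."""

    def __init__(self, seconds: float):
        self.seconds = seconds

    async def execute(self, microscope):
        # Let the sample settle before the next observation.
        await asyncio.sleep(self.seconds)
```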
Live visualization is provided by ActionPerceptionViewer:

```python
import asyncio

from deepthought.run import ActionPerceptionViewer

# Create the viewer
viewer = ActionPerceptionViewer()

# Update callback
async def update_view(perception):
    viewer.update_perception(perception)
    # Shows:
    # - detected cells
    # - current field of view
    # - quality metrics
    await asyncio.sleep(0.1)

# Run the experiment with live visualization
results = await experiment.run(initial_state, callback=update_view)
```

Current version: 2.0.0-alpha.0
DeepThought follows Semantic Versioning with additional alpha/beta release designations:
- Version format: MAJOR.MINOR.PATCH-RELEASE_TYPE.NUMBER
  - MAJOR: incompatible API changes
  - MINOR: new features in a backward-compatible manner
  - PATCH: backward-compatible bug fixes
  - RELEASE_TYPE: alpha/beta/rc/final
  - NUMBER: sub-version for alpha/beta releases (0, 1, 2, etc.)

Examples:
- 2.0.0-alpha.0: first alpha release of version 2.0.0
- 2.0.0-alpha.1: second alpha release with improvements
- 2.0.0-beta.0: first beta release
- 2.0.0: final release
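This scheme orders releases as you would expect. For example, Python's `packaging` library (not a DeepThought dependency, just an illustration) parses and compares these version strings correctly:

```python
from packaging.version import Version

# Alpha releases precede betas, which precede the final release.
assert Version("2.0.0-alpha.0") < Version("2.0.0-alpha.1")
assert Version("2.0.0-alpha.1") < Version("2.0.0-beta.0")
assert Version("2.0.0-beta.0") < Version("2.0.0")
```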
The package is laid out as follows:

```
deepthought/
├── microscopy_loop.py        # Core action-perception loop
├── microscopy_strategies.py  # Observation strategies
├── microscopy_workflows.py   # High-level experiments
├── microscope.py             # Hardware interface
├── observation.py            # Perception management
├── biology.py                # Biological entity models
└── run.py                    # Main entry point
```
The system operates on a continuous loop of:
- Observe: Gather data about the current state
- Perceive: Update understanding of the sample
- Decide: Choose next action based on current perception
- Act: Execute chosen action
- Validate: Ensure action completed successfully
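A minimal sketch of that loop follows. The method names `observe`, `update`, `execute`, and `validate` are illustrative assumptions; the real implementation lives in microscopy_loop.py:

```python
async def action_perception_loop(microscope, strategy, perception):
    """Illustrative only -- method names are assumptions, not the real API."""
    while not strategy.is_complete(perception):
        observation = await microscope.observe()      # Observe
        perception = perception.update(observation)   # Perceive
        action = strategy.next_action(perception)     # Decide
        result = await action.execute(microscope)     # Act
        result.validate()                             # Validate
    return perception
```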
Strategies are composable and can be combined for complex experiments:
```python
# FocusMapStrategy and friends are defined in microscopy_strategies.py
strategy = CompositeStrategy([
    FocusMapStrategy(positions),
    MapSampleStrategy(center, size),
    MultiChannelAcquisitionStrategy(channels),
])
```

All actions and perceptions are logged to Bluesky's database:
- Experiment metadata
- Action history with parameters and results
- Perception state evolution
- Quality metrics
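DeepThought handles this wiring internally. For reference, the standard Bluesky pattern for persisting emitted documents looks like this (a sketch using databroker's v1 API, not DeepThought-specific code):

```python
from bluesky import RunEngine
from databroker import Broker

RE = RunEngine({})
db = Broker.named("temp")  # throwaway catalog for demonstration
RE.subscribe(db.insert)    # every document the RunEngine emits is stored
```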
This project is licensed under the MIT License - see the LICENSE file for details.
For licensing inquiries, please contact: pskeshu@gmail.com