This software is an exploration of .NET Aspire, using the technology to build and run a microservice architecture on local development environments.
From Aspire's documentation:
> Aspire gives you a unified toolchain: launch and debug your entire app locally with one command, then deploy anywhere—Kubernetes, the cloud, or your own servers—using the same composition.
In addition to exploring Aspire, this solution also implements local language models using LM Studio and a MudBlazor UI.
Based on the strengths of both Aspire and MudBlazor, I decided to build a dashboard where each widget relies on one or more microservices to manage some aspect of family life.
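To give a flavour of the composition Aspire enables, a minimal AppHost might look like the sketch below. The project and resource names here are illustrative, not the actual ones in this solution:

```csharp
// AppHost/Program.cs — illustrative Aspire composition (names are hypothetical)
var builder = DistributedApplication.CreateBuilder(args);

// A backing microservice for one dashboard widget
var whatsOn = builder.AddProject<Projects.WhatsOnService>("whats-on-service");

// The MudBlazor frontend, wired to discover the widget service by name
builder.AddProject<Projects.FrontendWeb>("frontend-web")
       .WithReference(whatsOn);

builder.Build().Run();
```

With this in place, `WithReference` makes the widget service's endpoint available to the frontend through Aspire's service discovery, so the UI can call it by its resource name rather than a hard-coded address.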
- Create working Aspire solution.
- Create a set of services.
- Integrate a language model running locally via an OpenAI-like API.
- Create a working MudBlazor UI.
- Implement orchestration services.
- Implement a "What's on?" widget.
- Explore resilience policies with Polly.
- Deploy via the Aspire manifest file.
- Test the LM integrations using smaller models.
- Consider options for running an always-on dashboard (Raspberry Pi?).
- Design additional widgets.
- Implement a "Personal calendar" widget.
- Implement an "On this day" widget.
- Implement a "Daily word challenge" widget.
- Implement a "Bin days" widget.
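The Polly item above could be sketched as a retry-plus-timeout pipeline using Polly's v8 `ResiliencePipeline` API. The values and target URL below are placeholders, not the ones used in this solution, and the `Polly.Core` NuGet package is assumed:

```csharp
using Polly;
using Polly.Retry;

// Illustrative resilience pipeline: retry transient failures with
// exponential backoff, and cap the total time spent on the call.
ResiliencePipeline pipeline = new ResiliencePipelineBuilder()
    .AddRetry(new RetryStrategyOptions
    {
        MaxRetryAttempts = 3,
        Delay = TimeSpan.FromMilliseconds(500),
        BackoffType = DelayBackoffType.Exponential
    })
    .AddTimeout(TimeSpan.FromSeconds(10))
    .Build();

using var httpClient = new HttpClient();

// Wrap an outbound call (e.g. to a widget microservice) in the pipeline.
var events = await pipeline.ExecuteAsync(async ct =>
    await httpClient.GetStringAsync("https://whats-on-service/events", ct));
```

A pipeline like this keeps transient network failures from surfacing as broken widgets, while the timeout prevents a slow dependency from stalling the dashboard.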
No known defects.
GitHub Copilot was used to assist in the development of this software.
Note
VS Code Insiders does not appear to work with Aspire at this time; I will retry in the future and update the documentation as necessary.
Warning
I am using the new SLNX format for my solution file. This requires version 9.0.200 of the .NET SDK.
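If you want to pin the SDK version, a `global.json` at the repository root can enforce this. The version shown is the minimum mentioned above; the `rollForward` behaviour is a suggestion:

```json
{
  "sdk": {
    "version": "9.0.200",
    "rollForward": "latestFeature"
  }
}
```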
For version 24.04 (LTS) of Ubuntu (which I am currently running on my laptop), the most up-to-date version of .NET 9.0 is accessed from the .NET backports registry and is 9.0.111 (as of writing).
A system capable of running LM Studio is required.
Details of my personal system are below.
Note
The hardware in use on my PC includes an Accelerated Processing Unit (APU), which combines a CPU and GPU on a single chip. Recommendations for alternative hardware can be found here; performance will depend on the models you choose to run (and other operational factors).
Configure LM Studio as per the documentation.
Download a model; you can use community leaderboards to help select an appropriate model.
Use the Developer tab to run your chosen model as an API server.
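Once the server is running, LM Studio exposes an OpenAI-compatible endpoint (by default on `http://localhost:1234`). A minimal C# call might look like the sketch below; the model name is a placeholder and depends on which model you downloaded:

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Text.Json;

using var http = new HttpClient { BaseAddress = new Uri("http://localhost:1234") };

// OpenAI-style chat completion request; "local-model" is a placeholder name.
var response = await http.PostAsJsonAsync("/v1/chat/completions", new
{
    model = "local-model",
    messages = new[]
    {
        new { role = "user", content = "Summarise the following events..." }
    }
});

// The reply follows the OpenAI response shape: choices[0].message.content.
var json = await response.Content.ReadAsStringAsync();
var reply = JsonDocument.Parse(json)
    .RootElement.GetProperty("choices")[0]
    .GetProperty("message").GetProperty("content").GetString();
```

This is the same request shape an OpenAI client library would send, which is what makes LM Studio a drop-in local substitute for the hosted API.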
I previously noticed when using the Vulkan llama.cpp (Windows) runtime that some models were failing to load.
The Vulkan runtime is optimised for GPU offload; it was when the GPU offload setting was enabled that the models failed to load.
I was able to use the CPU llama.cpp (Windows) runtime; however, this limits inference to CPU resources only, which significantly affects performance.
This issue was reported here.
As of 10 December 2025, this issue is now resolved on my computer.
Updating my system's BIOS, drivers, AMD Software: Adrenalin Edition and LM Studio fixed the issue.
The Vulkan llama.cpp (Windows) runtime now successfully opens the models.
I also now have access to an AMD-specific ROCm llama.cpp (Windows) runtime, which also successfully opens the models and supports GPU offload.
Docker needs to be running; no additional configuration is needed for the software to run the necessary containers.
Clone the repository.
Open in Visual Studio Code.
Build the projects.
- The application gathers data regarding events currently being advertised in the local area.
- It passes the data to a local language model to summarise the events.
- The summary and the event data are then presented on the dashboard.
Start the application.
From the Aspire dashboard, load the frontend-web project.
The dashboard will begin to load the data:
Once the data is ready, the dashboard will update the view:
This repository was created primarily for my own exploration of the technologies involved.
I have selected an appropriate license using this tool.
This software is licensed under the MIT license.
More detailed information can be found in the documentation:

