this is a custom hobby project for myself, nothing special
it is called Talepreter, an interpreter for tales, which could also be used for other scenario-driven purposes such as movie making, books and novels mainly.
this is not the entire code, and it is expected to fail to build, on purpose: it requires some secret assemblies from a private nuget repo on github. the rest is as is.
it also has a WPF-based GUI, which is not included here for some reasons. maybe it will be added later.
this is the new version of a former project (https://github.com/gungorenu/talepreter-public). it is redesigned in some parts so the techstack fits better. the notion of learning the tech is fulfilled, so now I can make the best decisions. many parts are different compared to the former version: some shortcuts were removed, some design problems were solved. the main difference is focusing more on MongoDB and less on EntityFramework; entities are strongly typed since the plugin system is removed. another difference is preparing for a web GUI. right now it is available with the old WPF client and a new web client (which reads data from MongoDB directly, so it works with almost 98% the same code). performance looks better, although still not within my expectations. it is still an ongoing project anyway.
see the end of the document about future versions, because this is not the last version.
Talepreter is a helper app to summarize what has happened in the tale, which is especially useful after maybe hundreds of pages. it stores information about actors, anecdotes (which could be anything), NPCs (non-actors), settlements and some world information. when I am writing the next page, I need to know what has happened before, and if the tale is very long then sometimes important information might be skipped. it would create inconsistencies if the writer does not follow what has been mentioned in former pages.
Ex: if actor X does not smoke (maybe he has never done it in his life), future pages of the tale should not mention that "X smokes a cigarette". similarly, if actor Y does not have a driving license, future pages should not mention him driving a car (for sure there might be exceptions, like him driving without a license while escaping from bad guys); consistency would be broken in such cases. maybe a settlement had a landslide some time ago and many things changed there, so new pages must consider the former information. if the tale is long (like a tale spanning a hundred years) then some people shall pass away eventually. it would be weird if actor X never ages and his mother is still living beyond the limits of human life (unless the world has different rules).
this app will help the writer on such topics (it does not fix things automatically). the writer can check many details before writing the next page of the tale. at the very least, the writer can look at Talepreter views to think and focus on real content instead of continuously digging around to check if something is broken. for my own hobby, the app does many other things too (many calculations) which I do not want to bother with here. this does not mean it is limited to these features only; it is a starter for now.
the techstack is pure new stuff where possible, and some of it is new for me too. I am a developer, not a devops guy, so as long as stuff works, that is good enough.
- mainly uses .NET 8 (netcore)
- uses Orleans, RabbitMQ, EntityFramework, MongoDB, REST API
- orchestration is basic docker-compose for now, but kubernetes yml may be added later. everything runs in containers, including the web interface (except the WPF GUI)
- DBs are SqlServer (MSSQL) and MongoDB, both running in docker
the GUI is both an Angular (MEAN stack) web client and a WPF client targeting Windows. Talepreter runs in docker linux containers (5+2 services plus infrastructure for now), and the WPF GUI stays on Windows due to some hard requirements related to the filesystem.
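as an illustration of that layout, a docker-compose sketch for this kind of setup could look roughly like the fragment below. service names, image tags and the single `talesvc` entry are my guesses for illustration, not the real compose file (which is not included here):

```yaml
# hypothetical sketch of the container layout, not the actual compose file
services:
  talesvc:                  # placeholder for one of the Talepreter services
    image: talepreter/talesvc:latest
    depends_on: [rabbitmq, mssql, mongodb]
  rabbitmq:
    image: rabbitmq:3-management
  mssql:
    image: mcr.microsoft.com/mssql/server:2022-latest
    environment:
      ACCEPT_EULA: "Y"
  mongodb:
    image: mongo:7
```

the real setup would repeat the service entry for each of the 5+2 services; only the infrastructure containers (RabbitMQ, MSSQL, MongoDB) would need to be reachable by the GUI.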
this is a hobby project; most things are done a certain way on purpose, to develop faster. if it works well enough, then I am happy enough. if it works on my machine, that means it is done.
some parts of code may not be visible. this is on purpose.
my content (the content of a tale, a scenario for example) always lives in a filesystem backed by source control, so losing the entire data in Talepreter is very normal (part of an action I do perhaps daily, nothing to do with development). the entire data will be read from the filesystem and regenerated again (and again and again), which might be a little weird compared to apps in other businesses.
when I write a tale, I continuously go back and add stuff to former pages. from an author's perspective it is very weird: a completed page must not be touched, but I do touch them because I am not a good writer. this forces me to build views again and again, and sometimes to disconnect parts of the tale to rewrite them some time later. it means a page (even the first one) is never complete, even when I am writing the thousandth page. the next day I might go back to a former page, change it, and need to see new results. this means the entire view data must be regenerated again. that is the main use case for the app.
some data modeling is weird, but it is done on purpose. the content of a tale is text, and the notes coming from the content must be humanly readable/writable (even json is too unreadable for that), so there are many generic rules to support all kinds of notes (I call them page commands). it is perhaps not a good idea for a system like this, but again the main purpose is my own usage, and I write tales in markdown, so the design of the app is heavily affected by that too: simple text processing, mainly.
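the actual page command syntax is not part of this text, but to give a feeling for "humanly readable notes embedded in markdown", a page note could look something like the lines below. this format is entirely invented for illustration; the real syntax is different and not shown here:

```
ACTOR X: trait smoker = no
ACTOR Y: license driving = none
SETTLEMENT Riverside: event landslide (page 112)
```

the point is that a writer can type such notes inline while writing, without breaking the flow into json or any other machine-first format.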
there are some places where the application does weird things, but I could not find a better way. the source of commands in a tale is single and the same for every service, so the same command will go to four services to be processed. the problem starts there: not every service is interested in every command. as the designer of each service I know which commands will be executed where, but the response system still has to be fast and must not block controller grains.
there are some parts which I have concerns about: performance of grains and being blocked by a common DB. in my previous application everything was in memory and processing was done in a single thread. since it was a single application, I did not need a concept of communication. now with multiple services there is one, and furthermore the worker entities are Orleans actors, which means they will sometimes be forced down to a single thread. I have already taken some shortcuts on validation to make things faster. a full publish operation in the old application takes up to 3 seconds on my sample tale of 30 chapters / 660 pages / 7k page commands. that means 14k messages (7k x2) will be processed during a single publish operation. the same grains will be called many times for updates (about progress) and response handling. due to this x3 message handling I focused on progress response handling: how much is done within a second, and maybe how long a full publish takes.
there could be other ways to handle this, but I chose that way to see if it is that bad or negligible. one of the open issues is writing (uploading) page commands: they go directly into the services, because a single tale is affected, and chapter/page grains are only used to process/execute responses, not to write. the obvious solution would be not to upload everything again, but the problem comes from the changes I make naturally. I change the past of the pages, so most operations (process/execute) have to be done again. that direction (changing only what is needed) is what I will implement in the next version, maybe going with a full actor model depending on performance.
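the "how much is done within a second" idea above amounts to throttling progress reports so grains are not called once per message. a minimal sketch of that, assuming a callback-based reporter (the class and method names are invented for illustration, not the actual code):

```typescript
// sketch of per-second progress reporting: report at most once per interval,
// plus a final report when everything is done. names are invented.
class ProgressReporter {
  private done = 0;
  private lastReport = 0;

  constructor(
    private total: number,
    private report: (done: number, total: number) => void,
    private intervalMs = 1000,
  ) {}

  // called once per processed message; only forwards a report when at least
  // intervalMs has passed since the last one, or when we reach the total
  advance(now: number = Date.now()): void {
    this.done++;
    if (now - this.lastReport >= this.intervalMs || this.done === this.total) {
      this.lastReport = now;
      this.report(this.done, this.total);
    }
  }
}

// simulate the 14k messages (7k commands x2) with a fake clock of 1 "ms" each
const reports: number[] = [];
const r = new ProgressReporter(14000, (d) => reports.push(d));
for (let i = 0; i < 14000; i++) r.advance(i);
console.log(reports.length); // → 14
```

with this shape, 14k messages collapse into a handful of progress calls, which is the kind of reduction that keeps controller grains from being hammered.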
I have been working on it again, this time with more focus on my own stuff than on presenting it. its design changed fairly drastically: stuff runs differently now but does the same thing. the biggest difference is that there will not be a public repo anymore. the changes in v3:
- Not Public: most stuff is in the correct place now, so many services include domain code directly instead of abstracted assemblies (which I used to make the code public). also I have learned enough, so there is no need to try weird things (for the purpose of learning) anymore.
- Kubernetes: all services run in kubernetes, because running inside docker is fine but has shortcomings (read further: Orleans IP addresses!). the cluster has around 4+ jobs (db migrations), 1 cronjob (cleanup), 3 services (one of them can scale up but gives me no benefit beyond 3-4 instances, performance stays about the same) plus all infrastructure services (mongodb, postgres, redis and rabbitmq) inside a single cluster. I have 3+1 kubernetes files (the +1 is for the web frontend, which is in another repo). there is also one issue kubernetes solved where the docker setup fell short: Orleans services (silos) need hardcoded IP addresses to coordinate with each other. if the host machine had a dynamic IP, the docker setup would fail, because the services could not reach each other after the dynamic IP changed (they would need to be recreated). it would prevent scaling up as well; they could scale up using different ports, but that is a big mess. kubernetes solves it perfectly: services in the cluster get their own IP addresses and never require anything from outside. not a big thing either, but only MongoDB and RabbitMQ are needed by the GUI/web frontend, so exposing just those is enough, and other services like postgres can run on internal ports without being visible outside the cluster.
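the "expose only what the frontend needs" part can be sketched with two kubernetes Service manifests; names and ports below are illustrative assumptions, not the actual k8s files:

```yaml
# hypothetical sketch: mongodb is reachable from outside the cluster for the
# GUI/web frontend, while postgres stays internal on a ClusterIP service
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  type: NodePort            # exposed for the GUI/web frontend
  selector:
    app: mongodb
  ports:
    - port: 27017
      nodePort: 30017
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP           # internal only, invisible outside the cluster
  selector:
    app: postgres
  ports:
    - port: 5432
```

a similar ClusterIP-only service in front of each silo is what gives Orleans stable in-cluster addressing without hardcoding host IPs.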
- Performance: the biggest benefit comes from (distributed) Redis. now stuff is cached, so there is very little or no reason to go to the database (mongodb) again and again. it made things very fast. on a decent new machine I tested the solution: the same scenario went from ~15min in the old design down to ~1min. it is huge. I believe the real trick was to cache the right stuff.
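the caching pattern described here is essentially cache-aside: check the cache first, fall back to the database, store the result. a minimal sketch, where a `Map` stands in for Redis and a plain function stands in for mongodb (both are stand-ins, not the real code):

```typescript
// cache-aside sketch: try the cache, fall back to the "database", cache the
// result. CacheAside and fetchFromDb are invented names for illustration.
class CacheAside<T> {
  private cache = new Map<string, T>();
  dbHits = 0; // counts how often we actually went to the "database"

  constructor(private fetchFromDb: (key: string) => T) {}

  get(key: string): T {
    const hit = this.cache.get(key);
    if (hit !== undefined) return hit; // fast path: no database trip
    this.dbHits++;
    const value = this.fetchFromDb(key);
    this.cache.set(key, value);
    return value;
  }
}

// repeated reads of the same actor only hit the "db" once
const actors = new CacheAside((id: string) => `actor-${id}`);
actors.get("1");
actors.get("1");
actors.get("2");
console.log(actors.dbHits); // → 2
```

in the real system the "right stuff" to cache matters more than the mechanism: data that is read by many commands but rarely changes is what turns 15min into 1min.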
- Coordination: hard to describe. in the former design, operations were divided into 4 categories (one per service) and had to be coordinated (they had to wait for the final one before continuing to the next page, for example). now everything is merged: commands no longer have a category like that, and everything is owned by one coordinator. this reduced the coordination overhead to a minimum, since there is one centralized coordinator. duplicated operations in each service were cut as well. unfortunately some functions became monolithic, but that is the trade-off.
- New Services: actually a redesign/merge and re-separation of the old services' operations. in the former design there were 4 workers doing separate things plus some shared stuff. in the new design there is a single coordinator which calls grains in coordination, and the grains live in one service (was 4; this is the one that scales up). the coordinator service has little work to do, and the grains are super fast on a powerful machine and can scale up. the frontend and tale services are the same; they do simple stuff.
- Shared DB: a little trade-off compared to the old design. in the old design each service (there were 4) had mock objects to store the data it was not fully interested in. the Actor and Person services are interested in Race, so each had its own mock objects for Race info. there were many other cases, and sometimes it was even more annoying (the Anecdote service needed to know actor names, just a string array, but the only way was to listen to all Actor commands and build up the names as a collection; overall it hurt performance because the service had to do something for each and every Actor command, and there are many). now it is a single MongoDB database/table, so instead of accessing mock objects the services access real objects, and, as you would guess, they are cached. it makes things faster and also more robust and direct. a little away from microservice architecture, in exchange for performance.
- Mongodb Redesign: in the old design every publish had its own table, so cloning a publish was very simple: just clone the table (cloning a publish is a very common scenario, so the database design focused on it as well; a tech debt though). in the new design objects live in their own tables, as in a proper database design: Actors are in the Actor table, for example. this change was more logical, but the problem was that each table can now hold objects from multiple tales/publishes. the design had to change a little to cover this, but with the right change it works perfectly now without tales interfering with each other. sadly, some queries (especially on the web side) became immensely big aggregates (some pipelines have 7+ stages), but that is the challenge and the fun part.
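the multi-tale table idea boils down to discriminator fields on every document, so every pipeline starts with a `$match` on them. a sketch of that, with the same isolation simulated over an in-memory array instead of mongodb; the field names (`taleId`, `publishId`) are assumptions for illustration, not the real schema:

```typescript
// a shared-collection pipeline always scopes to one tale/publish first;
// field names are invented for this sketch
const pipeline = [
  { $match: { taleId: "tale-1", publishId: "p2" } }, // isolate one tale/publish
  { $sort: { name: 1 } },
  { $project: { _id: 0, name: 1 } },
];

// the same isolation simulated in plain code over an in-memory "table"
const actorTable = [
  { taleId: "tale-1", publishId: "p1", name: "X" },
  { taleId: "tale-1", publishId: "p2", name: "X" }, // cloned publish, same actor
  { taleId: "tale-2", publishId: "p1", name: "Z" }, // another tale, never visible here
];
const visible = actorTable
  .filter((a) => a.taleId === "tale-1" && a.publishId === "p2")
  .map((a) => a.name);
console.log(visible); // → [ "X" ]
```

with a compound index on the discriminators, the scoping `$match` stays cheap even when the real pipelines grow to 7+ stages.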
- Spin Up: this was partly a problem of my docker setup (plus its shortcomings and limitations around Orleans and IP addresses). now, with correct k8s files, it can be spun up with 2-3 commands. all kubernetes files are prepared; only the secrets have to be set up once and it is ready to go. there are still minor manual steps, but a minimum.
- Cleanup: mainly about abandoned/faulty/orphan data. a cronjob now gathers and collects (deletes) it. this abandoned data actually should not occur; it is due to code errors. ex: purging a publish hits an error, and then the tale can no longer see old publishes, so whatever remains of that publish data is orphaned. now, even with code errors, abandoned/orphan data will be collected properly by the cronjob. another (design) cleanup is grains. some grains made no sense, and the hard dependency on grains slowed the system. in the old design Tale > Publish > Chapter > Page were all separate grains with storage. now it is only Tale > Publish, and it works the same. no Chapter or Page grains are needed, the operations do the same thing, and there is less need to go to the db or Orleans silos.
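for reference, the cleanup cronjob could be declared roughly like this; the schedule, image name and container name below are made up for illustration, not the real manifest:

```yaml
# hypothetical sketch of the cleanup cronjob manifest
apiVersion: batch/v1
kind: CronJob
metadata:
  name: orphan-cleanup
spec:
  schedule: "0 3 * * *"        # once a day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: talepreter/cleanup:latest
```

the nice property of running cleanup as a cronjob is that it needs no coordination with the services: it just sweeps whatever orphan data the last code error left behind.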
- Postgres: bye SqlServer, welcome Postgres. the sql db use was a simple thing (also a learning aid for EF) and temporary within the scenario as well, so a smaller DB is always better; it could even be in memory (SqlServer took GBs of memory while Postgres uses only around 100MB for the same scenario).
- More Features: there are some more features, mainly about tale connectivity, which required DB changes but also some coordinator design work. I have plans to connect tales (for example, one tale is a prequel and another is its sequel), and those require data copying. it required some hints about what a sequel should have at the beginning (it is not a brand new tale, so some stuff has to be present already). the new design allows this clearly and easily, and new work tasks can be added easily (the coordinator thingy).
overall I am satisfied, and from now on there will be only simple changes.