I am looking at Akka as I would like to use event sourcing for certain microservices in my system (we are migrating an older space-based architecture to an event-based microservice architecture). My challenge is high volume: an always-increasing ~1.5bn new entities a day, each fairly short-lived (1-10 days) and receiving only a few commands and therefore events. I need to process those 1.5bn inside of maybe 4 hours (the load is actually spread through the day, but with significant spikes), which works out to roughly 100,000 entities per second at peak.
I have been looking at how to accomplish this on a few event-sourcing-oriented platforms, trying to find a good-fit platform that I can adopt: Akka, Axon, EventStore, KStreams.
To get adequate performance on reasonable resources, my belief is that I need to be able to 'archive' the events of 'retired' entities off to a separate long-term, read-only store (most likely as 'history' documents in a document DB). I would expect to do that by building the history store from the original events and then deleting the entity and its events from Akka at some appropriate point, roughly as in the sketch below.
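Something like this is what I have in mind, sketched with the classic Akka Persistence API. `HistoryStore`, `ShortLivedEntity`, and the event types are hypothetical placeholders, and I have left out failure handling (e.g. `DeleteMessagesFailure`):

```scala
import akka.persistence.{DeleteMessagesSuccess, PersistentActor}

// Hypothetical interface to the long-term read-only store
// (e.g. a document DB client); assume the call is durable on return.
// In practice this would be asynchronous, with the result piped
// back to the actor before deleting.
trait HistoryStore {
  def saveHistoryDocument(persistenceId: String, events: Seq[Any]): Unit
}

case object Retire
final case class EntityEvent(payload: String)

// A short-lived persistent entity that, on retirement, writes its
// full event history to the long-term store as one 'history'
// document and then deletes its events from the live journal.
class ShortLivedEntity(id: String, history: HistoryStore) extends PersistentActor {
  override def persistenceId: String = s"entity-$id"

  // Only a few events per entity, so keeping them in memory is cheap.
  private var events = Vector.empty[EntityEvent]

  override def receiveRecover: Receive = {
    case e: EntityEvent => events :+= e
  }

  override def receiveCommand: Receive = {
    case cmd: String =>
      persist(EntityEvent(cmd)) { e => events :+= e }

    case Retire =>
      // Archive first; delete from the journal only once the
      // history document is safely written.
      history.saveHistoryDocument(persistenceId, events)
      deleteMessages(lastSequenceNr)

    case DeleteMessagesSuccess(_) =>
      context.stop(self) // journal is clean; the entity can go away
  }
}
```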
So my major challenge in looking at Akka is this archiving question. From my background (finance) this seems a fairly typical requirement, so I am surprised to find no direct support for it in any of these platforms so far (hopefully I am wrong). If you receive a large number of new aggregates to process every day, you want good performance (and especially reasonable resource requirements), and the aggregates have a clear limit to their active life in the system, then you want to exploit that lifecycle: keep your active event store lean and offload the old but still interesting-for-audit (i.e. read-only) data to a separate store.
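For what it's worth, the closest thing to built-in support I have found in Akka is snapshot-based retention in Akka Persistence Typed: it can delete events from the journal once a snapshot covers them, but it discards rather than archives them, so on its own it only solves half the problem. A minimal sketch (the entity, command, and event names are just placeholders):

```scala
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior, RetentionCriteria}

sealed trait Command
final case class Record(value: String) extends Command

final case class Recorded(value: String)
final case class State(values: Vector[String])

// A typed entity whose journal is trimmed automatically.
// withDeleteEventsOnSnapshot deletes events once a snapshot covers
// them, but does NOT copy them anywhere first, so by itself it
// cannot satisfy the audit/history requirement described above.
def behavior(entityId: String): EventSourcedBehavior[Command, Recorded, State] =
  EventSourcedBehavior[Command, Recorded, State](
    persistenceId = PersistenceId.ofUniqueId(s"entity-$entityId"),
    emptyState = State(Vector.empty),
    commandHandler = (_, cmd) =>
      cmd match {
        case Record(v) => Effect.persist(Recorded(v))
      },
    eventHandler = (state, evt) => State(state.values :+ evt.value)
  ).withRetention(
    RetentionCriteria.snapshotEvery(numberOfEvents = 10, keepNSnapshots = 1)
      .withDeleteEventsOnSnapshot
  )
```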
A typical deployment of our system will be on-premise, with ~1.5bn new aggregates per day, each processed within around 1-10 days and then retained for 10 years. I don't see how that would be viable without such an archiving approach.
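Back-of-envelope: ~1.5bn aggregates per day retained for 10 years is about 1.5bn × 3650 ≈ 5.5 trillion aggregates, and at a few events each something north of 10 trillion stored events, of which only the last 1-10 days' worth are ever written to or replayed. Keeping the rest in the live event store, rather than in a cheap read-only history store, seems hard to justify.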