Discussion about this post

David Miller:

Ralf,

The second context check still seems to suffer from a race condition:

Portal 1: transfer money from Alice to Bob

Portal 2: transfer money from Alice to Charlie

Command processor 1: Alice's and Bob's accounts look consistent

Command processor 2: Alice's and Charlie's accounts look consistent

CP 1: check again, still good

CP 2: check again, still good

CP 1: complete the transfer from Alice to Bob

CP 2: complete the transfer from Alice to Charlie

But now Alice is overdrawn.
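To make the interleaving concrete, here is a minimal sketch (all names and the event shapes are hypothetical, not from the article): both processors check the same history twice, both pass, and both append, overdrawing Alice.

```python
# A single append-only event log; Alice starts with 100.
events = [("deposit", "Alice", 100)]

def balance(account, history):
    # Fold the log into a balance for one account.
    total = 0
    for kind, acct, amount in history:
        if acct == account:
            total += amount if kind == "deposit" else -amount
    return total

def check(amount, history):
    return balance("Alice", history) >= amount

snapshot = list(events)        # both processors see the same history
assert check(60, snapshot)     # CP 1: Alice -> Bob looks consistent
assert check(60, snapshot)     # CP 2: Alice -> Charlie looks consistent
assert check(60, snapshot)     # CP 1: check again, still good
assert check(60, snapshot)     # CP 2: check again, still good
events.append(("withdraw", "Alice", 60))   # CP 1 completes the transfer
events.append(("withdraw", "Alice", 60))   # CP 2 completes the transfer
print(balance("Alice", events))            # -20: overdrawn
```

No interleaving of the checks helps here: as long as both appends happen after both checks, the second check sees nothing new.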

Maybe, as a first step even before the initial query, the command processor issues a "request to transfer" event; then, during the "check again" step, it checks for new events after its own "request to transfer", and also for other, active "request to transfer" events.
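A hedged sketch of that reservation idea (event names, the earliest-request-wins rule, and all helpers are my own assumptions): each processor announces its intent first, and the second check counts every reservation up to and including its own, so the later request sees the rival and cancels.

```python
# Append-only log; Alice starts with 100. Events: (kind, account, amount, owner).
events = [("deposit", "Alice", 100, None)]

def reserved_through_own_request(account, cp):
    """Balance counting deposits, cancels, and every reservation up to
    and including cp's own request -- earliest request wins."""
    total = 0
    for kind, acct, amount, owner in events:
        if acct != account:
            continue
        if kind == "deposit":
            total += amount
        elif kind == "request":
            total -= amount
            if owner == cp:
                break          # later requests are not our problem
        elif kind == "cancel":
            total += amount
    return total

def request(cp, amount):
    events.append(("request", "Alice", amount, cp))   # announce intent first

def complete(cp, amount):
    # "Check again": now every earlier request is visible, including rivals'.
    if reserved_through_own_request("Alice", cp) < 0:
        events.append(("cancel", "Alice", amount, cp))
        return False
    events.append(("transferred", "Alice", amount, cp))
    return True

# Worst-case interleaving: both intents land before either check.
request("CP1", 60)
request("CP2", 60)
assert complete("CP1", 60) is True    # 100 - 60 reserved: proceeds
assert complete("CP2", 60) is False   # sees CP1's reservation, cancels
```

The reservation turns check-then-act into an ordering decision recorded in the log itself; some tie-breaking rule (here, log position) is still needed so that not all competing requests abort.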

Alisher:

Hey, great article and an interesting concept, but I have some practical questions about this approach.

Assuming there’s a single “event stream” (append-only log), the querying step becomes O(n), growing with every event appended.

This replay must run on every command, and not just once: we need to run it twice, the second time right before writing new events, to make sure we aren’t working from a stale model.

If we introduce the basic optimization used in ES – namely, snapshots – we hit another problem: in the old Aggregate approach we had a single model per entity. In the new approach we must store models per Command Context *and* per entity (e.g. account/device), which is very inconvenient.
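To make the cost concrete, a small sketch (the stream layout and helper names are my own assumptions, not from the article): with one global stream, a single command folds the entire log twice, and the work grows with every appended event.

```python
stream = []   # single append-only log shared by all entities
replayed = 0  # counts events folded, to make the O(n) cost visible

def project_balance(account):
    global replayed
    total = 0
    for kind, acct, amount in stream:   # O(len(stream)) every call
        replayed += 1
        if acct == account:
            total += amount if kind == "deposit" else -amount
    return total

def handle_command(account, amount):
    if project_balance(account) < amount:   # initial query: O(n)
        return False
    if project_balance(account) < amount:   # "check again": another O(n)
        return False
    stream.append(("withdraw", account, amount))
    return True

for _ in range(1000):
    stream.append(("deposit", "Alice", 1))

handle_command("Alice", 5)
print(replayed)   # 2000: two full replays for one command
```

A snapshot cache would cut the replay short, but its key could no longer be just the entity id: it would have to be the pair (command context, entity id), one cached model per combination.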

How would you tackle this?

