Scalability. An event store is append-only, so there is very little contention for resources (perhaps a page lock on an index during an insert). Commands can therefore execute and persist very quickly without having to wait for shared read locks to clear.
Once the events are appended to the event store, they are published asynchronously and handled by separate threads or processes (possibly on different servers) that update their local read stores. These updates are the only writes that occur against the read stores, and they happen far less frequently than reads, so most read queries can acquire the shared read locks they need and execute very quickly.
The idea is that you divide your app across two (or more) data stores: one append-only event store, to which you can write without worrying about read locks, and one or more read-only (except for event updates) data stores, from which you can read without waiting for exclusive update locks to clear. This mitigates the blocking, deadlocks, and related issues that occur in typical RDBMS systems.
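To make that split concrete, here is a minimal, illustrative Python sketch (all names are invented for this example, not from any framework): an append-only event store that publishes each event to a queue, and a read model updated by a separate projection thread, which is where the eventual consistency comes from.

```python
import queue
import threading

class EventStore:
    """Append-only log; appends never wait on readers' shared locks."""
    def __init__(self):
        self._log = []
        self._out = queue.Queue()   # published events, consumed asynchronously

    def append(self, event):
        self._log.append(event)     # the only write to the event store
        self._out.put(event)        # publish after persisting

    def close(self):
        self._out.put(None)         # sentinel: stop the projection (demo only)

    def events(self):
        return list(self._log)

class BalanceReadModel:
    """Read store: written only by the projection thread, read by queries."""
    def __init__(self):
        self.balances = {}

    def handle(self, event):
        kind, account, amount = event
        delta = amount if kind == "Deposited" else -amount
        self.balances[account] = self.balances.get(account, 0) + delta

def run_projection(store, read_model):
    """Drain published events on a separate thread until the sentinel."""
    while True:
        event = store._out.get()
        if event is None:
            break
        read_model.handle(event)

store = EventStore()
model = BalanceReadModel()
worker = threading.Thread(target=run_projection, args=(store, model))
worker.start()

store.append(("Deposited", "acct-1", 100))
store.append(("Withdrawn", "acct-1", 30))
store.close()
worker.join()

print(model.balances["acct-1"])   # read model has caught up: 70
```

Between `append` returning and the projection thread processing the event, a query against the read model would see a stale balance; that window is the "eventual" part discussed in the thread below.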
--- In email@example.com, "i_adore_serena" <serenarules@...> wrote:
> Pardon me for the intrusion into this thread, Vernon, but I noticed the topic is focusing now on consistency, and it relates to some of the issues I have with event sourcing. Would you mind giving me your take on my statements below?
> Eventual consistency is a major reason why I am having difficulties with event sourcing. While the overall concept is wonderful, I am a bit too pragmatic for it, I think, at least in the way it is usually defined. Pre-event sourcing, in a command handler, once a method was called on a business object, I would always just save the object using a unit of work. This is immediate. So when I started looking at event sourcing, I had a hard time straying from this ideal. Once the business method was called without generating exceptions, I want to immediately call send on the event bus to persist the changes I just validated and applied. Having done that, it makes sense to also immediately persist the actual events. Some people argue that this might cause sequencing errors, but I can't see how. If my business method caused two events to be generated, those events would be persisted in the same order they were generated. If another user makes additional changes at the same time I do, but hits the save button first, their events will be persisted just before mine. Still no problem. Considering that I really don't like the idea of storing events in a queue, and that they are all being immediately persisted, whenever an object is requested, it is rebuilt completely from persisted events. So the next time I, or that other person, load the object, we will see the results of both our sets of changes.
> So why is eventual consistency such a focal point with event sourcing? It seems to introduce more levels of complexity and points of failure than it solves.
> --- In firstname.lastname@example.org, Raoul Duke <raould@> wrote:
> > On Mon, Dec 5, 2011 at 8:41 PM, vvernon_shiftmethod
> > <vvernon@> wrote:
> > > I did. I commented on one example of when it works, and one example of when it doesn't work well. Perhaps there are ways to solve the problem in some domains, but it seems to greatly complicate things to force it. Eventual consistency works fine in a lot of cases.
> > as a slow-brained programmer, i kinda have a strong distaste for
> > eventual consistency. too damned hard to understand and debug and make
> > sure that the inconsistencies are the right allowed kinds vs. actual
> > serious bugs.