The case for Event Sourcing

Event sourcing replaces brittle CRUD systems with a flexible, future-proof approach that stores every action as an event, making business logic transparent and adaptable to change.

Why CRUD API development grinds to a halt without gaining you anything in return.

The Shopping Cart That Grew

Picture this: someone arrives with requirements for a system. Technical or not, they're convinced their general idea is right, and future changes will be minor tweaks. The canonical example: "I want people to browse the site and add/remove items to their shopping cart."

Simple enough. You have a catalogue of products and a table of users, so you add a `ShoppingCart` resource - just a bundle of product IDs owned by a user. This works perfectly in production. Months go by.

Then someone from business convinces the decision makers: "If we track what people put in their cart and remove again but still buy the rest, we can target them with emails - 'Still looking for a vacuum cleaner? Here are some more options' - personalized discounts, the works."

"So when can we start sending those emails?"

"Well, we don't have that data."

"From now on then? But the targetable customers from the last months are lost?"

"No, we need to add new tables first: `cart_item_lifecycle` with user_id, item_id, added_timestamp, removed_timestamp, purchased_on_same_day..."

This schema will never be "complete" - you can add infinite twists to how your data gets presented. After implementing this new behavior tracking, you discover that assumptions cascade: "this change affects that table, but only in special cases, which actually need another table altogether..."

Unless you separate the "resulting view" from "the actions as they happened," you're eternally playing catch-up with spaghetti code.

The Alternative Timeline

What if instead you started with events:

- `shop_action_navigate_page` (who, when, from_page)

- `shop_action_add_item_to_cart` (screen location, timestamp, which item)

- `shop_action_remove_item` (the 5 W's again - you see the pattern)

- `shop_action_checkout`
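
In code, each of these can be a small record. A minimal sketch (the field names are illustrative, not a fixed schema):

type ShopEvent struct {
	ID        int64     // monotonically increasing, assigned by the store at append time
	Type      string    // e.g. "shop_action_add_item_to_cart"
	UserID    string    // who
	ItemID    string    // which item, where applicable
	FromPage  string    // navigation context for page views
	Location  string    // screen location of an add-to-cart click
	Timestamp time.Time // when it happened
}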

You pay a large upfront cost just to answer "what's in their current cart?" But when sales wants to find fence-sitter customers next month, you just add a sibling to your `current_shopping_cart_of_user` projection:

for _, event := range eventsOfUser {
	switch event.Type {
	case "shop_action_add_item_to_cart":
		items.Add(event.ItemID)
	case "shop_action_remove_item":
		items.Remove(event.ItemID)
		abandonedItems.Track(event.ItemID, event.Timestamp)
	case "shop_action_checkout":
		// Clear current cart, trigger purchase logic
	}
}

One loop, zero spaghetti. No existing table relationships to navigate, no schema migrations. Just the idea for a projection and the iterator to implement it.

Because when they come back next week saying "Great analysis! We made money from those CTAs, but don't include put-backs within 15 seconds - those are accidental clicks, not actual interest", with CRUD it starts all over again, while with events it's just adding a time threshold check:

case "remove_from_cart":   
	timeSincePut := event.Timestamp.Sub(lastPutTimestamp[event.ItemID])   
	if timeSincePut > 15*time.Second {
		abandonedItems.Track(event.ItemID, event.Timestamp)
}
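
The only other piece is the bookkeeping the snippet above relies on: the add-to-cart case records when each item went in (assuming `lastPutTimestamp` is just a map from item ID to timestamp):

case "shop_action_add_item_to_cart":
	items.Add(event.ItemID)
	lastPutTimestamp[event.ItemID] = event.Timestamp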

The same pattern applies to any new requirement. Each projection is self-contained. Simply think of a new idea and jot down some notes.

So Why Doesn't Everybody Do This Already?

"Writing such a system from scratch is more complex than 'just work on a db row'":

True, but this complexity is front-loaded and pays dividends. Plus, you don't have to choose one or the other - start writing new features with an event store while treating existing data as special events. I've taken old tables, renamed columns, and basically called them "manual_entry_xyz" events, with newer events being proper event sourcing, not "CRUD sourcing." Having an event store doesn't prohibit handling HTTP traffic normally.
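
As a rough sketch of that backfill idea - `legacyCartRows`, `store.Append`, and the column names are all stand-ins for whatever your old tables actually contain:

// One-time backfill: wrap each pre-existing row in a synthetic "manual entry"
// event so old data and new events live in the same stream.
for _, row := range legacyCartRows {
	store.Append(ShopEvent{
		Type:      "manual_entry_cart_row",
		UserID:    row.UserID,
		ItemID:    row.ItemID,
		Timestamp: row.CreatedAt, // best guess at when the fact became true
	})
}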

Event ordering complexity: Yes, "in which sequence should events get applied?" For systems where users can "retroactively change the timeline," complexity increases beyond "SELECT * FROM events ORDER BY timestamp ASC". But most business domains have natural temporal ordering.
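
One common mitigation, sketched here as an assumption rather than a requirement: let the store assign a monotonically increasing ID at append time and replay by that, so identical timestamps or clock skew can't shuffle the log:

// Replay in write order instead of wall-clock order.
sort.Slice(events, func(i, j int) bool {
	return events[i].ID < events[j].ID
})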

Performance concerns: "Event log only ever increasing" becomes problematic unless you memoize your projection builder. Each projection can be defined as (last_version + next_event). Store the last version and up to which event it integrated.
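
In sketch form, reusing the hypothetical `ShopEvent` from earlier, a projection simply carries its own checkpoint next to its state:

type CartProjection struct {
	Items       map[string]bool
	LastEventID int64 // highest event ID already folded into this state
}

// Apply folds a single event into the cached state; anything irrelevant
// to this projection simply falls through the switch.
func (p *CartProjection) Apply(e ShopEvent) {
	switch e.Type {
	case "shop_action_add_item_to_cart":
		p.Items[e.ItemID] = true
	case "shop_action_remove_item":
		delete(p.Items, e.ItemID)
	}
	p.LastEventID = e.ID
}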

Maintenance overhead: If nobody uses your product, then nobody will want changes. All software with 1+ users is ever-changing. The question is whether you want to work on the software or use it.

"Such a huge upfront cost"

Actually, there's minimal complexity cost to saving rich event streams, while their potential for retroactive analysis is extremely vast. Front-loaded design work transforms most development into writing focused projections with clear scopes.

This architectural approach enables a crucial advantage: involving business people directly in development.

Both automated testing and sanity checks can be represented as: "Consider this timeline, where these events happened: [list]. When those are added, how does the whole situation change?" This lets developers delegate business logic validation to domain experts, since they can comprehend and validate the logic without getting lost in implementation syntax.
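
Such a check might look like the following sketch - `replay`, `at`, and the `Abandoned` accessor are made-up helpers; the point is that the "given" part is a plain list of events a domain expert can read:

func TestAbandonedItemIgnoresQuickPutBacks(t *testing.T) {
	given := []ShopEvent{
		{Type: "shop_action_add_item_to_cart", ItemID: "vacuum-123", Timestamp: at(0)},
		{Type: "shop_action_remove_item", ItemID: "vacuum-123", Timestamp: at(5 * time.Second)},
	}
	state := replay(given) // build the projection from the timeline above
	if state.Abandoned("vacuum-123") {
		t.Error("a put-back within 15 seconds should count as an accidental click")
	}
}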

Here's a real example showing how event sourcing makes business logic transparent. I took production code and had an LLM generate pseudocode "for a domain expert who doesn't program". This lets stakeholders validate logic without getting lost in implementation details:

Inventory contains:
- Items: map of item_id -> InventoryItem

InventoryItem contains:
- Id: number
- Title: text
- Homeostatic: number (target stock level)
- Stocked: number (current stock)
- ConsumptionHistory: list of consumption periods
- PreviousUpdate: timestamp

Event Processing:

inventory_create_item:
	item = new InventoryItem {
		Id = event.ID
		Homeostatic = 1
		Stocked = 0
		Title = data.Title
		ConsumptionHistory = empty list
		PreviousUpdate = event timestamp
	}
	Items[item.Id] = item

inventory_delete_item:
	remove Items[data.Item]

inventory_set_homeostatic:
	item = Items[data.Item]
	if item not found: error
	item.Homeostatic = data.Homeostatic

inventory_update_stock:
	item = Items[data.Item]
	if item not found: error

	delta = item.Stocked - data.Stocked
	time_elapsed = event_timestamp - item.PreviousUpdate
	consumption_rate = delta / time_elapsed_hours

	add to item.ConsumptionHistory: {
		Delta: delta
		Over: time_elapsed
		End: event_timestamp
		Rate: consumption_rate
	}

	item.Stocked = data.Stocked
	item.PreviousUpdate = event_timestamp
	recalculate when item will be empty

inventory_update_title:
	item = Items[data.Item]
	if item not found: error
	item.Title = data.Title
Business stakeholders can read this format, spot edge cases, and validate the logic directly. The bugs shift from "why didn't you think of this obvious thing" to "our expert signed off on this being sensible."

Beyond stakeholder collaboration, this approach simplifies API design. **You can stop worrying about** crafting the perfect REST endpoints or "elegant" abstractions. Instead, capture everything when events happen and query what you need through projections.
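
For instance - a sketch, not a prescribed API, with `store.Append` and `userFrom` as assumptions - a write endpoint shrinks to "record what just happened":

func handleRemoveItem(w http.ResponseWriter, r *http.Request) {
	// No row updates, no bespoke resource modelling: just record the fact.
	err := store.Append(ShopEvent{
		Type:      "shop_action_remove_item",
		UserID:    userFrom(r), // hypothetical helper reading the session
		ItemID:    r.URL.Query().Get("item"),
		Timestamp: time.Now(),
	})
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusAccepted)
}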

The Performance Sweet Spot

Repeatedly calculating projections is the perfect application for memoization. The "stay the same if nothing relevant to you changed" rule can oftentimes be handled simply via: "the cache of projection X is valid up to eventId 999, but we are now at 1007. Go through the new events; if none are relevant, bump the marker from 999 to 1007 and we're done."
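
That catch-up step is small enough to sketch, reusing the checkpointed projection from above and assuming the store can hand back "everything after event N":

// EventStore is an assumed interface: it returns all events with an ID
// greater than the given checkpoint, in order.
type EventStore interface {
	EventsSince(lastEventID int64) []ShopEvent
}

func catchUp(p *CartProjection, store EventStore) {
	// e.g. the cache is valid up to event 999 and the log is at 1007:
	// only events 1000..1007 get folded in; irrelevant ones fall through Apply.
	for _, e := range store.EventsSince(p.LastEventID) {
		p.Apply(e)
	}
}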

So we can actually get the lookup speed of tables - perhaps even better, since it's realistic to have exactly the projections you want and need, shaped the way they are consumed. If you have a kitchen sink of different tables all older than five years, you won't be able to just spit the db query back more or less as-is.

When It's Worth The Trade-off

When can you afford not to?

There is no trade-off. Battle-tested event sourcing systems are readily available, and writing one in-house is a one-time effort; you can nerd-snipe your developers into building it over a weekend if you let them.

When changing "a simple CRUD system", business logic regularly leaks out and gets tangled up with technical limitations. Event sourcing lets you "just do it".

The conceptual shift is simple: store the *why*, not just the *what*. Events as immutable facts versus mutable state. When explaining to non-technical stakeholders: "So then what happens?" becomes a question you can actually answer with confidence, because you have the complete story of how you got to any given state.

This article sets up the conceptual foundation. The follow-up will show exactly how this looks in daily development - the actual usage, feel, and implementation patterns that make event sourcing practical rather than theoretical.

About the Author

Alessandro is a technical mastermind and Chief Technology Officer at Iridium Works. Over the years he has built countless systems across frontend, backend, and DevOps work, and as a Tech Lead. He writes about new technology and software development.
