
Alessandro Frank
CTO
Event sourcing replaces brittle CRUD systems with a flexible, future-proof approach that stores every action as an event, making business logic transparent and adaptable to change.

Picture this: someone arrives with requirements for a system. Technical or not, they're convinced their general idea is right, and future changes will be minor tweaks. The canonical example: "I want people to browse the site and add/remove items to their shopping cart."
Simple enough. You have a catalogue of products, a table of users, so you add a `ShoppingCart` resource - just a bundle of product IDs owned by a user. This works perfectly in production. Months go by.
Then someone from business convinces the decision makers: "If we track what people put in their cart and remove again but still buy the rest, we can target them with emails - 'Still looking for a vacuum cleaner? Here are some more options' - personalized discounts, the works."
"So when can we start sending those emails?"
"Well, we don't have that data."
"From now on then? But the targetable customers from the last months are lost?"
"No, we need to add new tables first: `cart_item_lifecycle` with user_id, item_id, added_timestamp, removed_timestamp, purchased_on_same_day..."
This schema will never be "complete" - you can add infinite twists to how your data gets presented. After implementing this new behavior tracking, you discover that assumptions cascade: "this change affects that table, but only in special cases, which actually need another table altogether..."
Unless you separate the "resulting view" from "the actions as they happened," you're eternally playing catch-up with spaghetti code.
What if instead you started with events:
- `shop_action_navigate_page` (who, when, from_page)
- `shop_action_add_item_to_cart` (screen location, timestamp, which item)
- `shop_action_remove_item` (the 5 W's again - you see the pattern)
- `shop_action_checkout`
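In Go, each of these can be a plain, timestamped struct with a type tag. The field names below are illustrative, not a prescribed schema:

```go
// Event is an immutable fact: what happened, to whom, and when.
type Event struct {
	ID        int64     // position in the append-only log
	Type      string    // e.g. "shop_action_add_item_to_cart"
	UserID    string    // who
	ItemID    string    // which item, when applicable
	FromPage  string    // where the action originated
	Timestamp time.Time // when
}
```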
You pay a large upfront cost just to answer "what's in their current cart?" But when sales wants to find fence-sitter customers next month, you just add a sibling to your `current_shopping_cart_of_user` projection:
```go
for _, event := range eventsOfUser {
	switch event.Type {
	case "shop_action_add_item_to_cart":
		items.Add(event.ItemID)
	case "shop_action_remove_item":
		items.Remove(event.ItemID)
		abandonedItems.Track(event.ItemID, event.Timestamp)
	case "shop_action_checkout":
		// Clear the current cart, trigger purchase logic.
	}
}
```

One loop, zero spaghetti. No existing table relationships to navigate, no schema migrations. Just the idea for a projection and the iterator to implement it.
Because when they come back next week saying "Great analysis! We made money from those CTAs, but don't count put-backs within 15 seconds - those are accidental clicks, not actual interest", the CRUD version starts all over again. With events, it's just a time-threshold check:
case "remove_from_cart":
timeSincePut := event.Timestamp.Sub(lastPutTimestamp[event.ItemID])
if timeSincePut > 15*time.Second {
abandonedItems.Track(event.ItemID, event.Timestamp)
}The same pattern applies to any new requirement. Each projection is self-contained. Simply think of a new idea and jot down some notes.
"Writing such a system from scratch is more complex than 'just work on a db row'":
True, but this complexity is front-loaded and pays dividends. Plus, you don't have to choose one or the other - start writing new features with an event store while treating existing data as special events. I've taken old tables, renamed columns, and basically called them "manual_entry_xyz" events, with newer events being proper event sourcing, not "CRUD sourcing." Having an event store doesn't prohibit handling HTTP traffic normally.
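A minimal sketch of that backfill idea, assuming a hypothetical `legacy_cart_items` table and an `EventStore` with an `Append` method (both names are mine, not a specific library's API):

```go
// A one-time backfill: wrap each legacy cart row as a synthetic
// "manual_entry" event so old data lives in the same stream as new events.
func backfillLegacyCarts(db *sql.DB, store *EventStore) error {
	rows, err := db.Query(`SELECT user_id, item_id, created_at FROM legacy_cart_items`)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var e Event
		if err := rows.Scan(&e.UserID, &e.ItemID, &e.Timestamp); err != nil {
			return err
		}
		e.Type = "manual_entry_cart_item" // marks provenance: backfilled, not organic
		store.Append(e)
	}
	return rows.Err()
}
```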
Event ordering complexity: Yes, "in which sequence should events get applied?" For systems where users can "retroactively change the timeline," complexity increases beyond "SELECT * FROM events ORDER BY timestamp ASC". But most business domains have natural temporal ordering.
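If users can rewrite the timeline, one common mitigation is to replay in append order and treat the business timestamp as payload. A sketch, assuming an `events` table with an append-ordered `id` column (illustrative schema, not any particular store's layout):

```go
// Replay order = append order. A retroactive correction is just a new
// event carrying an old business timestamp; projections decide its weight.
rows, err := db.Query(`SELECT id, type, payload, business_ts
                       FROM events ORDER BY id ASC`)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
```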
Performance concerns: "Event log only ever increasing" becomes problematic unless you memoize your projection builder. Each projection can be defined incrementally: new_version = apply(last_version, next_event). Store the last version together with the ID of the last event it integrated.
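A sketch of that shape, reusing the illustrative `Event` from earlier (the `Snapshot` name is mine):

```go
// Snapshot memoizes a projection: folded state plus the last event it integrated.
type Snapshot struct {
	Items       map[string]bool // projection state, e.g. the current cart
	LastEventID int64           // everything up to and including this ID is applied
}

// Apply folds one event in: new_version = apply(last_version, next_event).
func (s *Snapshot) Apply(event Event) {
	switch event.Type {
	case "shop_action_add_item_to_cart":
		s.Items[event.ItemID] = true
	case "shop_action_remove_item":
		delete(s.Items, event.ItemID)
	}
	s.LastEventID = event.ID // irrelevant events still advance the marker
}
```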
Maintenance overhead: If nobody uses your product, then nobody will want changes. All software with 1+ users is ever-changing. The question is whether you want to work on the software or use it.
"Such a huge upfront cost"
Actually, there's minimal complexity cost to saving rich event streams, while their potential for retroactive analysis is extremely vast. Front-loaded design work transforms most development into writing focused projections with clear scopes.
This architectural approach enables a crucial advantage: involving business people directly in development.
Both automated testing and sanity checks can be represented as: "Consider this timeline, where these events happened: [list]. When those are added, how does the whole situation change?" This lets developers delegate business logic validation to domain experts, since they can comprehend and validate the logic without getting lost in implementation syntax.
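That phrasing maps almost directly onto a plain Go test; `ReplayCart` and its `Abandoned` field are hypothetical names for the projection sketched earlier:

```go
func TestRemovedItemCountsAsAbandoned(t *testing.T) {
	base := time.Date(2024, 1, 1, 12, 0, 0, 0, time.UTC)

	// "Consider this timeline, where these events happened: ..."
	timeline := []Event{
		{Type: "shop_action_add_item_to_cart", ItemID: "vacuum-42", Timestamp: base},
		{Type: "shop_action_remove_item", ItemID: "vacuum-42", Timestamp: base.Add(60 * time.Second)},
		{Type: "shop_action_checkout", Timestamp: base.Add(90 * time.Second)},
	}

	// "...how does the whole situation change?"
	cart := ReplayCart(timeline)

	// Removed a minute after adding, never repurchased: a fence-sitter item.
	if !cart.Abandoned["vacuum-42"] {
		t.Errorf("expected vacuum-42 to be tracked as abandoned")
	}
}
```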
Here's a real example showing how event sourcing makes business logic transparent. I took production code and had an LLM generate pseudocode "for a domain expert who doesn't program". This lets stakeholders validate logic without getting lost in implementation details:
```
Inventory contains:
- Items: map of item_id -> InventoryItem

InventoryItem contains:
- Id: number
- Title: text
- Homeostatic: number (target stock level)
- Stocked: number (current stock)
- ConsumptionHistory: list of consumption periods
- PreviousUpdate: timestamp

Event Processing:

inventory_create_item:
    item = new InventoryItem {
        Id = event.ID
        Homeostatic = 1
        Stocked = 0
        Title = data.Title
        ConsumptionHistory = empty list
        PreviousUpdate = event timestamp
    }
    Items[item.Id] = item

inventory_delete_item:
    remove Items[data.Item]

inventory_set_homeostatic:
    item = Items[data.Item]
    if item not found: error
    item.Homeostatic = data.Homeostatic

inventory_update_stock:
    item = Items[data.Item]
    if item not found: error
    delta = item.Stocked - data.Stocked
    time_elapsed = event_timestamp - item.PreviousUpdate
    consumption_rate = delta / time_elapsed (in hours)
    add to item.ConsumptionHistory: {
        Delta: delta
        Over: time_elapsed
        End: event_timestamp
        Rate: consumption_rate
    }
    item.Stocked = data.Stocked
    item.PreviousUpdate = event_timestamp
    recalculate when item will be empty

inventory_update_title:
    item = Items[data.Item]
    if item not found: error
    item.Title = data.Title
```

Business stakeholders can read this format, spot edge cases, and validate the logic directly. The bugs shift from "why didn't you think of this obvious thing" to "our expert signed off on this being sensible."
Beyond stakeholder collaboration, this approach simplifies API design. You can stop worrying about crafting the perfect REST endpoints or "elegant" abstractions. Instead, capture everything when events happen and query what you need through projections.
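Under that philosophy, a write endpoint degenerates into "validate minimally, append, acknowledge". A hedged sketch, with `eventStore.Append` standing in for whatever store you use:

```go
// POST /events: the only write path. No per-feature endpoints,
// no anticipating which fields a future projection will want.
http.HandleFunc("/events", func(w http.ResponseWriter, r *http.Request) {
	var event Event
	if err := json.NewDecoder(r.Body).Decode(&event); err != nil {
		http.Error(w, "malformed event", http.StatusBadRequest)
		return
	}
	event.Timestamp = time.Now() // the server, not the client, owns time
	eventStore.Append(event)
	w.WriteHeader(http.StatusAccepted)
})
```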
Repeatedly calculating projections is the perfect application for memoization. The "stay the same if nothing relevant to you changed" property can often be handled as: "the cache of projection X is valid up to event 999, but we are now at 1007. Go through the new events; if none are relevant, bump the marker from 999 to 1007 and we're done."
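The catch-up step then becomes a tiny loop, reusing the hypothetical `Snapshot` from above; `store.After` is an assumed accessor returning events with IDs greater than the argument, not a specific library's API:

```go
// CatchUp replays only what the cached projection hasn't seen yet.
// If none of the new events are relevant, only LastEventID moves (999 -> 1007).
func CatchUp(s *Snapshot, store *EventStore) {
	for _, event := range store.After(s.LastEventID) {
		s.Apply(event)
	}
}
```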
So we can actually get the lookup speed of tables - perhaps better, since it's realistic to maintain exactly the projections you want and need, shaped the way you use them. With a kitchen sink of tables that are all older than five years, you won't be able to just "spit back the db query more or less."
When can you afford not to?
There is no tradeoff. Battle-tested event sourcing systems are readily available, and writing one in-house is a one-time effort; you can nerd-snipe your developers into building it over a weekend if you let them.
When changing "a simple CRUD system", business logic regularly leaks into, and gets entangled with, technical limitations. Event sourcing lets you "just do it".
The conceptual shift is simple: store the *why*, not just the *what*. Events as immutable facts versus mutable state. When explaining to non-technical stakeholders: "So then what happens?" becomes a question you can actually answer with confidence, because you have the complete story of how you got to any given state.
This article sets up the conceptual foundation. The follow-up will show exactly how this looks in daily development - the actual usage, feel, and implementation patterns that make event sourcing practical rather than theoretical.
Alessandro is a technical mastermind and Chief Technology Officer at Iridium Works. Over the years he has built countless systems across front-end, back-end, and DevOps work, and as a tech lead. He writes about new technology and software development.