Sunday 30 September 2012

Eventual Consistency is a usability concept

When Google introduced Spanner, the big news was NewSQL: the availability of general-purpose transactions. In recent years we have seen movement in a different direction: basically, that transactions are not needed in many cases and developers should handle them at a higher layer. Often, the main argument was a concept coined Eventual Consistency. In short, this concept was said to mean that consistency is sufficiently reached if (in a distributed environment) all data is consistent at some point in time. The actual point may depend heavily on the implementation of the storage subsystem (the database).

In his original post on Eventual Consistency, Werner Vogels already had in mind a much broader notion:
Many times the application is capable of handling the eventual consistency guarantees [...] user-perceived consistency. In this scenario the inconsistency window needs to be smaller than the time expected for the customer to return for the next page load. 
Greg Young put this into context when he showed the relatively small impact of server-side consistency in comparison to stale data along the other transport layers to the client. Hence, I believe the point of Werner Vogels' paper was never that consistency is irrelevant, or that a database does not have to be consistent; rather, the point was that, across system boundaries, consistency needs to be in sync with user expectations.
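To make the "inconsistency window" tangible, here is a toy sketch (all names invented, not Vogels' model): a write becomes visible on a read replica only after a propagation delay. If the user returns after that window has elapsed, the system appears perfectly consistent to them.

```python
import time

class EventuallyConsistentStore:
    """Toy model: a write becomes visible on the read replica only
    after a fixed propagation delay (the 'inconsistency window')."""

    def __init__(self, inconsistency_window=0.05):
        self.window = inconsistency_window
        self.primary = {}
        self.pending = []   # (visible_at, key, value) awaiting replication
        self.replica = {}

    def write(self, key, value):
        self.primary[key] = value
        self.pending.append((time.monotonic() + self.window, key, value))

    def read(self, key):
        # Apply every replication event whose delay has elapsed.
        now = time.monotonic()
        still_pending = []
        for visible_at, k, v in self.pending:
            if visible_at <= now:
                self.replica[k] = v
            else:
                still_pending.append((visible_at, k, v))
        self.pending = still_pending
        return self.replica.get(key)

store = EventuallyConsistentStore(inconsistency_window=0.05)
store.write("greeting", "hello")
stale = store.read("greeting")   # None: the window has not elapsed yet
time.sleep(0.1)                  # the user 'returns for the next page load'
fresh = store.read("greeting")   # now consistent: "hello"
```

The point of the sketch is the comparison Vogels makes: whether `stale` or `fresh` is what the user sees depends purely on whether the inconsistency window is shorter than the time between their interactions.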

Eventual Consistency is a usability concept.

Take a look at CQRS, for instance: the translation of the concept into a broader architectural context. You need Eventual Consistency to cope with the CAP theorem, yet this does not mean in any way that your database cannot be transaction-safe. On the other hand, you can build an eventually consistent system and UI on top of a perfectly transaction-safe database like Spanner. Responsive design often means asynchronous design, regardless of the services or storage you use. Consistency is not a requirement database engineers have to solve; it's a user experience requirement.
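A minimal CQRS sketch makes this split visible (class and event names are invented for illustration): the command side applies writes under a lock, standing in for a transaction-safe database, while the read model catches up asynchronously by consuming events, so reads are eventually consistent by design, not because the storage is sloppy.

```python
import queue
import threading

class CommandSide:
    """Applies writes 'transactionally' and publishes change events."""

    def __init__(self, events):
        self.state = {}
        self.events = events
        self.lock = threading.Lock()

    def handle(self, key, value):
        with self.lock:                 # stands in for a DB transaction
            self.state[key] = value
        self.events.put((key, value))   # publish for the read model

class QuerySide:
    """A read model that lags behind until its projection runs."""

    def __init__(self, events):
        self.view = {}
        self.events = events

    def catch_up(self):
        # Drain all pending events into the read model.
        while True:
            try:
                key, value = self.events.get_nowait()
            except queue.Empty:
                break
            self.view[key] = value

events = queue.Queue()
commands = CommandSide(events)
queries = QuerySide(events)

commands.handle("order:1", "placed")
before = queries.view.get("order:1")   # None: the view hasn't caught up
queries.catch_up()                      # projection runs (here: manually)
after = queries.view.get("order:1")    # "placed"
```

Note that the write itself is fully consistent the moment `handle` returns; only the query side, the part the UI reads from, is eventually consistent, which is exactly the usability argument above.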

Sunday 9 September 2012

Map vs. the Landscape

Architects think in maps. Unlike real-world building architects, though, most IT architects cannot just walk to the construction site and get a feeling for the environment. Plus, there is less time to cover the spatial dimension of the system. At a time when high-speed links have become standard and our perception of a system's state is like "a star that burned out 50,000 years ago", IT architects need to fly high to cover the distance, even quite literally if they need to coordinate worldwide distributed systems, development teams and clients.

The maps we currently have of our systems are not capable of showing the right dimensions. We might even say that UML only covers the obvious parts, the IDE-supported parts, the fine-grained building blueprint instead of the actual, visionary architecture. Mapping functional, parallel, non-structured elements in UML is possible but loses any visually helpful information in the process. If we look at the interesting systems, the large systems, the old proverb "The most alluring part of a map is that which is unmappable" becomes true again.

Over a decade ago, the Agile Manifesto was the spearhead of a movement that banned documentation as chronically outdated. Today, with emergent, fault-tolerant architectures, we need to be able to look at the runtime state of a system, judging its functional and technical change over time, its clients and connected systems, rather than arranging the building blocks upfront. Process Mining, Architecture Integrity Control (and adaptation) and Continuous Refactoring become crucial. Evaluation becomes the key step of architecture, not design. The architect becomes an explorer with an idea, rather than a supervisor with a plan.

How could we map this changing landscape?

And I expected it to be wonderful - it was.
I expected the world to be sad - and it was.
I just didn't expect it to be so big.
XKCD/1110