Saturday, 22 December 2012

Spaceship Earth


There is a story (true or not) about Steve Jobs choosing the personalities to appear in the "Think Different" campaign. One of the "crazy ones" was Richard Buckminster Fuller, a person Steve Jobs was particularly fond of. Both admired the "Whole Earth Catalog" (published around the same time as Buckminster Fuller's Operating Manual for Spaceship Earth), the book from which Jobs took his famous quote "Stay hungry, stay foolish". Their admiration came from a trust in self-regulating systems, more specifically the belief that a better society could be built on the foundations of technology. This view was shared not only by them but also in economics and urbanism, which had at the same time discovered cybernetics and were busy weaving everything into Gaia theory.


via Andrew Reynolds


Emergence touches some aspects of self-organization and maybe even Gaia theory. The difference, though, is that emergence does not assume a stable state but rather "Black Swans" – disruptions; it embraces change.

You can illustrate the difference by looking at the iOS and Android (or Windows) ecosystems. iOS is like "intelligent design": it creates a sophisticated, stable state by aristocracy. An invisible hand shapes the ecosystem. But Adam Curtis uses the picture of Buckminster Fuller and hippie communes to show that obedience to this stability in favour of egalitarianism is not always best. Actually, making everything the same is industrializing it and thus maximizing differences. Even real-world architecture accepts that it cannot be eternal without authority. Android is the other way round. It's chaotic, nudged by an evil force, Google. While this force is always tempted to control the system, it has realized that innovation can only happen from within. While the system is less robust, more fragile, it is also quicker to adapt. 7-inch tablets, wrist watches and Lego Mindstorms can only happen with Android.

IT systems architecture is quite similar. Of course it's stupid to start from scratch every time you build a new system. But it's just as limiting to have an architecture department dictate every change. Foucault showed us that individuals who are exposed to a system will always try to use their power, turning the system into a machine – industrializing it and stopping innovation. Staying on the edge while managing properties at the same time is what systems architecture is about. Looking at nature and embracing emergence in order to achieve resilience is a virtue. But narrowing natural organization down to one arbitrarily chosen system, calling it perfect just to be able to cut everything else down, is a tragedy. Regardless of whether you call it Mechanization, Singularity, Ephemeralization or Machines of Loving Grace.


As Jez Humble writes in his review of Nassim Taleb’s Antifragile: “The problem with robust organizations is that they resist change. They aren’t quickly killed by changes to their environment, but they don’t adapt to them either – they die slowly”. Or, as Adam Curtis would say “they are stopping people who want to change the world. They’re actually even stopping people from having the framework to think that they could change the world”.

Sunday, 9 December 2012

RfC: Decoupled Client Architectures

Preparing a talk about decoupled client architectures (coming from multi-screen/responsive design et al.), I am trying to work out the different trends.

The currently most trending client architectures could be:
1) SOFEA (as in Flex, GWT, AngularJS, Backbone.js for instance)
2) ROCA (as in Play for instance)
3) CQRS (might be orthogonal, not sure, maybe Calatrava and Cocktails could be an example)

If I try to categorize them, ROCA stands out because it relies on "Unobtrusive JavaScript". It could therefore be considered "static" rather than "dynamic" (i.e. the representation is fixed and might only be augmented for usability reasons). SOFEA and CQRS differ in how they handle parallel events. SOFEA stems from a more traditional GUI heritage, abstracting client/server away from the client, while CQRS makes the client very aware of parallel events by forcing it to react to them. That's why I would consider SOFEA close to ROCA, just with an independent representation – though neither can really work locally without the server, because both rely quite heavily on something coming back. I am not yet convinced the distinction is rock-solid, as one could also argue ROCA is asynchronous because it allows for more versatility by enforcing hypertext.
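The "static vs. dynamic" distinction can be made concrete with a small sketch. All names and markup here are hypothetical, just to contrast the two styles: in ROCA the server-rendered HTML is the representation and JavaScript only augments it, while in SOFEA the client owns the representation and renders it from raw data delivered by a service.

```javascript
// ROCA-style: the server already rendered complete HTML; JavaScript only
// augments it (here: client-side filtering of the existing list items).
// Without JavaScript, the full list is still a usable page.
function rocaEnhance(html, filter) {
  const items = html.match(/<li>[^<]*<\/li>/g) || [];
  return items.filter((li) => li.includes(filter));
}

// SOFEA-style: the client owns the representation and renders it from
// JSON delivered by a service; without JavaScript there is no page at all.
function sofeaRender(orders, filter) {
  return orders
    .filter((o) => o.name.includes(filter))
    .map((o) => `<li>${o.name}</li>`);
}
```

Strip away the JavaScript and the ROCA variant degrades to a plain but working page; the SOFEA variant degrades to nothing, which is exactly why it counts as "dynamic".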

Hence, a possible matrix could be:



Now, the lower right corner is native Android (not web-based, so it does not really count), because Activities (screens) themselves are usually rather static but rely on messaging and have almost no control over the task flow.

Comments welcome.

Sunday, 25 November 2012

Nexus and distributed computing


“Nexus is the name of the humanoid robots in Blade Runner. Taking this name discloses Google’s strategy: to turn Android into an indistinguishable part of our lives. Nexus are stronger and weaker at the same time; they are us on speed. A prosthesis, or a Surrogate (to bring in another movie reference). Our technology systems will become a prosthesis, and as such not only ubiquitous and seamless but also versatile to changes. Ubiquitous and seamless we are working on, in terms of affective computing and responsive design, but versatility is still a big issue. In a world changing faster and faster, the Achilles heel of contemporary technology is change.”

That’s the paragraph I forgot to mention in my W-Jax keynote. This is going to be a post on what I meant with that example of the prosthesis. It would have continued on how Android is facilitating change, with message-driven APIs and data-update functionality; on how Google wants to introduce Web Intents, and how cloud providers are starting early concepts of application and data versioning. Let’s leave this aside for a while. Let’s talk about mobile apps and how we could start to embrace change in them.

From a user perspective, there should be no extremely complicated mobile apps – not due to platform or performance restrictions but due to context restrictions. What you want to use on mobile devices should be the happy case. Mobile applications should allow deep-diving into information because "the value of interfaces today is the information it wants to present"; even simple rules can get you anywhere. In desktop interfaces with an exact mouse pointer, every page can contain multiple actions. With mobile and tablets, these actions have to be morphed into flows (which can be event- and gesture-triggered). That's why I think metaphors of urbanism are better suited to describe mobile application architecture than concepts derived from, in the end, military structures.

This flow is the link to the left-out paragraph above. I am not a believer in the Singularity, and I do not think we will end up in a Blade Runner world. In its current state it's much more likely that technology will be so resistant to change that we will have to learn to innovate completely without it. Luckily, some systems, like Android, are starting to introduce concepts of flows, messaging – generally, versatility. This can only be achieved by decoupling data representations from functions and human interfaces, for instance by leveraging CQRS, mixed-model mobile apps or ROCA. All three of those towers (I would draw them turned 90° from what we call tiers today, because they would be accessible from everywhere) would be decoupled, freely linkable and versioned. The technology itself would become a truly representative tool. Just like the Nexus, but focused on what computers can do best: data and functions.

I imagine something like Erlang actors, OSGi or server-side JavaScript (think Cocktails or Calatrava) to exchange business logic. A concept where data and logic can be exchanged seamlessly and across versions; a concept that up to this point has not made it into any statically typed programming platform. However, there is a chance that Java (after JRebel, Jigsaw and Nashorn) could natively support versioned interfaces one day, maybe together with richer declarations for layers, tiers or towers. That fine day might be the brink of a new era of pervasive computing.
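As a rough illustration of what "exchanged seamlessly and across versions" could mean, here is a minimal sketch (all names hypothetical, and in JavaScript precisely because the statically typed platforms don't offer this yet): a dispatcher that routes messages by interface version, so two generations of business logic can coexist while clients migrate.

```javascript
// Hypothetical sketch: a registry that routes messages by "type@version",
// letting old and new handlers live side by side in the same system.
function createDispatcher() {
  const handlers = new Map();
  return {
    register(type, version, handler) {
      handlers.set(`${type}@${version}`, handler);
    },
    dispatch(msg) {
      const handler = handlers.get(`${msg.type}@${msg.version}`);
      if (!handler) throw new Error(`no handler for ${msg.type}@${msg.version}`);
      return handler(msg.payload);
    },
  };
}

// Two versions of the same piece of business logic coexisting:
const d = createDispatcher();
d.register("price", 1, (p) => p.amount);          // v1: flat amount
d.register("price", 2, (p) => p.amount * p.rate); // v2: currency-aware
```

Old clients keep sending version 1 messages, new clients send version 2, and neither side has to be upgraded in lockstep – the versatility argued for above.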

Sunday, 28 October 2012

Looking into the future

In preparation for my next talk I have been researching sci-fi movies – for months now. Just today I found a really nice article on "Cloud Atlas" on io9, where they quote the makers of the film:
»but with the Wachowskis, they talk about the history of skyscrapers and how building materials have dictated the shifts in architectural language. From stone towers to metal and glass we pondered the next leap in architectural vocabulary- nano built structures, organic skins with solar cells etc.«
Architecture is always evolutionary, as it is always based on a common understanding, a "shared hallucination" (via Ruth). Shaped by our use of materials and our society, architecture always represents a common vision of the future. That future doesn't have to come true, neither for sci-fi movies nor for architecture, but by deciding to implement this common vision, we can agree on a model. And this model in turn can be questioned by anyone, even those who would not have understood the details. This is why sci-fi movies are such a great example: they represent a pre-made iteration-1 prototype.

Sometimes, architecture brings several disciplines together, as in Calatrava's case. That's why I like Martin Fowler's new cross-platform framework of the same name. It allows an evolutionary approach, leveraging the power of mixed-model apps. Coming back to the materials: it allows us to take the fabric, the logic of an activity, and make it truly technology-independent – by actively choosing which technology is appropriate in which context.

Sunday, 30 September 2012

Eventual Consistency is a usability concept

When Google introduced Spanner, the big news was NewSQL: the availability of general-purpose transactions. In recent years we have seen movement in a different direction: basically, that transactions are not needed in many cases and developers should handle them at a higher layer. Often the main argument used was a concept coined Eventual Consistency. In short, this concept was said to mean that consistency is sufficiently reached if (in a distributed environment) all data is consistent at some point in time. The actual point could depend highly on the implementation of the storage subsystem (the database).

In his original post on Eventual Consistency, Werner Vogels already had in mind a much broader notion:
Many times the application is capable of handling the eventual consistency guarantees [...] user-perceived consistency. In this scenario the inconsistency window needs to be smaller than the time expected for the customer to return for the next page load. 
Greg Young put this into context when he showed the relatively small impact of server-side consistency compared to stale data along the other transport layers to the client. Hence, I believe the point of Werner Vogels' paper never was that consistency is irrelevant, or that a database does not have to be consistent; rather, the point was that, across system boundaries, consistency needs to be in sync with user expectations.

Eventual Consistency is a usability concept.

Take CQRS, for instance: the translation of the concept into a broader architectural context. You need eventual consistency in order to satisfy the CAP theorem, yet this does not mean in any way that your database cannot be transaction-safe. On the other hand, you can build an eventually consistent system and UI on top of a perfectly transaction-safe database like Spanner. Responsive design often means asynchronous design, regardless of the services or storage you use. Consistency is not a requirement database engineers have to solve; it's a user experience requirement.
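A minimal sketch of what this means in practice (names and structure hypothetical): the client-side read model masks the inconsistency window with the user's own pending writes, so the user perceives consistency ("read your own writes") even while the storage layer is still catching up.

```javascript
// Hypothetical sketch: an eventually consistent view where usability,
// not storage, defines the consistency the user actually experiences.
function createView(store) {
  const pending = new Map(); // optimistic local overlay of unreplicated writes
  return {
    write(key, value) {
      pending.set(key, value);          // show the write immediately
      store.replicateLater(key, value); // storage catches up eventually
    },
    read(key) {
      // prefer the user's own pending write over possibly stale storage
      return pending.has(key) ? pending.get(key) : store.read(key);
    },
    onReplicated(key) {
      pending.delete(key); // storage caught up; drop the overlay
    },
  };
}
```

The inconsistency window still exists in the database, but as long as it is shorter than the user's attention span – Vogels' "time expected for the customer to return for the next page load" – the experience is consistent.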

Sunday, 9 September 2012

Map vs. the Landscape

Architects think in maps. Unlike real-world building architects, though, most IT architects cannot just walk to the construction site and get a feeling for the environment. Plus, there is less time to cover the spatial dimension of the system. In a time where high-speed links are becoming standard and our perception of the state of a system is like "a star that burned out 50,000 years ago", IT architects need to fly high to cover the distance – even quite literally, if they need to coordinate globally distributed systems, development teams and clients.

The maps we currently have of systems are not capable of showing the right dimensions. We might even say that UML only covers the obvious parts, the IDE-supported parts: the fine-granular building blueprint instead of the actual, visionary architecture. Mapping functional, parallel, non-structured elements in UML is possible but loses any visually helpful information in the process. If we look at the interesting systems, the large systems, the old proverb "The most alluring part of a map is that which is unmappable" becomes true again.

Over ten years ago the Agile Manifesto was the spearhead of a movement that banned documentation as chronically outdated. Today, with emergent, fault-tolerant architectures, we need to be able to look at the runtime state of a system, judging its functional and technical change over time, its clients and connected systems, rather than arranging the building blocks upfront. Process mining, architecture integrity control (and adaptation) and continuous refactoring become crucial. Evaluation becomes the key step of architecture, not design. The architect becomes an explorer with an idea rather than a supervisor with a plan.

How could we map this changing landscape?

And I expected it to be wonderful - it was.
I expected the world to be sad - and it was.
I just didn't expect it to be so big.
XKCD/1110 

Friday, 31 August 2012

Hoping to bring order and structure to the Bazaar

The Design of Design is one of my favorite books. In "A Generation Lost in the Bazaar", Poul-Henning Kamp states: "It is a sad irony, indeed, that those who most need to read it may find The Design of Design entirely incomprehensible", where "those" refers to programmers who have "never seen a cathedral" (as in a beautiful architecture). His example of a cathedral is (the original idea of) Unix, whereas BSD is the bazaar.

Needless to say, I don't fully agree with the idea of a perfect upfront design for systems of a certain size. But that's not important. Funnily enough, I might be considered one of "those" – yet I like The Design of Design specifically because it talks about building a house and thinking about it iteratively. Furthermore, I am not a big fan of the building-architecture metaphor. I don't think black-and-white thinking is helpful: most cathedrals might turn out as a Tower of Babel, and others, like Unix, might be a pretty small shrine with 4,200 LoC.

Poul-Henning claims "quality happens only if somebody has the responsibility for it", which is apparently based on a quote from the book: "the boldest design decisions, whoever made them, have accounted for much of the goodness of the outcome". The question whether democracy/crowd/agile is better suited for certain problems than aristocracy/Delphi-method/waterfall will never be solved; there is no true or false here. I guess I can agree with Poul-Henning on responsibility as much as I can agree with Brooks on iterative design and still consider cathedrals harmful. The discussion always reminds me of the "first follower" meme – the key is not to have a central instance sanctioning "old festering hacks" (one comment on the post cited Linux as an example of that – very well done); the key is to avoid these hacks before they happen by allowing communication and embracing change.

My point is: who said someone needs to be responsible for the whole system? With modern programming techniques, programmers can be responsible for smaller parts and bring beauty to them. With modern collaboration techniques, we can share thoughts about others' architectures and avoid a mix of, say, Perl and Python. In fact, almost all cathedrals of the Middle Ages were built iteratively (and even some modern ones). Sometimes visionary architects come up with something like the Sydney Opera House and someone else finishes it. Ideas make leaders, not prohibition.

Sunday, 26 August 2012

On civil engineering

Jonathan pointed me to Nick's excellent post on why the analogy between civil engineering and enterprise architecture is flawed.

It's just that I don't agree with his conclusion that "civil engineering, in practice, has not made the leap to layered cities", which implies that layered cities are desirable. The Metabolists had them, Archigram experimented with them (the Plug-in City), and most importantly Yona Friedman. It was not because of cost (see Dubai) or complexity (see Beijing) that they were never implemented; they were not implemented because they would require either a common agreement on planning and design or, alternatively, a single building pattern. Both are almost impossible to reach in a free world, where the market decides and risk control is a central element of planning. Betch Andres-Beck (another theater student, quoted by Kent Beck) points out: "It's extremely difficult to reverse progress in the construction world [...] asymmetry of costs shapes the activities". In a cost-driven world, risk is key. In civil engineering this means decisions are literally fixed, as the risk is on CAPEX; in software engineering it means adaptability, as the risk is on OPEX.

Geoff says it better than I could in his take on Archigram:
Architecture can reshape how we inhabit continents, the planet, and the solar system at large. Whether or not you even want inflatable attics, flying carpets, and underwater eel farms, the overwhelming impulse here is that if you don't like the world you've been dropped into, then you should build the one you want.

Saturday, 18 August 2012

Structures can become shackles


Das Magazin featured some interviews about time. In one, a director said "The Joker is like Spinoza".

For me, Batman was always about Existentialism vs. Nihilism, basically a fight between brothers. (Maybe my visual preference tends to see everything as an origami cosmos.) Seeing it as an example of Relativism vs. Universalism, or even Rationalism vs. Absolutism, provokes interesting thoughts about the evolution of systems and how (business, moral, status) values influence their development.

"Structures can become shackles" is a key quote of the movie, as Philosofreaky points out. Gotham is an allegory for a system which can never be right. The paradox the Joker poses to Batman is that it's not actually peace that people want; it's being left alone. Batman can fight the Joker, but he can never fight the anarchy – with every hit against the Joker he is proving him right. Batman's solution is to keep Gotham oscillating between tradeoffs. As usual, 3quarksdaily features an excellent essay by Ajay Chaudhary about these tradeoffs, posing the central question:

Can a legal order exist without sovereign authority?

Here we are at emergence.

I don't agree with the article's conclusion that The Dark Knight Rises poses fascism as the solution to interpassivity and that we "dream in fascism". We might sometimes dream in the aesthetics of fascism, in Beijing-Olympics-style human patterns, and we might sometimes wish for a strong sovereign, but we certainly never wish to have no choice. As stated before, choice architecture can only work in a system that you control, and in a global world I believe there is no such absolutely controllable system. The Joker is just proving that all a mass needs to become uncontrollable is a little spin. The danger is not the Joker; it's the mass. In the context of this blog this means I do not agree that a software architect will always lead a system in a better direction, nor that patterns, nudging and control will cause a system to comply. Dogma is never good, be it in the name of Science, Machiavellianism or Beauty. All of them are useful tools, but only if the dynamics of the system lead to emergent compliance.

So – is the only solution pure anarchy? Can we argue it's only purely rational to develop systems the way the Joker (or V, as the symbol of Anonymous) would?

There is a major difference between "just human" systems and the engineered/architected systems we work with: whether we want it or not, they will always be engineered, designed, architected. But, as these systems evolve much more quickly than real-world ones, we learn faster: "we know we can break things, but how can we make things better?". Modern systems learn so quickly that they don't have to be anarchist or fascist; they can be libertarian. These systems don't need a Caesar where developers hide under a "ruthless and possibly insane warlord"; they can embrace democracy, they can adapt to new truths, especially because they are engineered by the means of adaptation. Like a fire brigade, they can form a team under a leader if the risk is high and visible, and form a swarm if it's more important to identify risk by covering the greatest possible horizon. We don't need a Batman because there is no Joker.

Monday, 13 August 2012

A New System of the World for IT

As usual, Todd Hoff nails it, although I don't share his views on PaaS and "open" source completely:


An interesting architecture evolution we are seeing in the cloud is how systems continually reorganize themselves to give components better access to information flows. This allows services to be isolated yet still have access to all the information they need to carry out their specialized function. Before firehose style architectures the easiest path was to create monolithic applications because information was accessible only in one place. Now that information can flow freely and reliably between services, much more sophisticated architectures are possible.
He also talks about Cell Architectures – a shame he doesn't link the two, like Meta-Agents.

Saturday, 11 August 2012

Rhizomic Timeline

Tumble Tree is the most needed browser plugin ever. For ages I have been looking for a way for a browser to show me a cross-tab/cross-window long-term history. Imagine this would work for your systems architecture, with all the interfaces and processes: how they emerge, how they fade, how they converge – the whole rhizome – ideally with a time-shift feature so you can also plan ahead.

Saturday, 4 August 2012

An architecture can emerge.


Recently I had the joy of stumbling over an interesting discussion regarding architectural emergence. It started with a blog post by Igor – a nice and intriguing read that basically says that, although every system has an architecture and therefore emergent architectures are possible, they will be of limited complexity due to a lack of evolutionary fitness. I like the idea of evolution, and that Igor admits emergent architectures are possible. The original post lists one key constraint on the quality of emergent architectures, though: "Developers do not anticipate upcoming requirements". Furthermore, it states that "natural selection doesn't think at all".
Coming from this "reset to zero" (with local changes) idea, the author argues that emergent architectures tend to carry higher technical debt than designed ones. Technical debt is not inherently a problem, but it can influence a system's adaptability negatively in a way that limits possible complexity.

I tend to disagree.

Firstly, I believe there is no lack of vision within development teams. Usually developers know what kind of product (from a high-level perspective) is expected as the "goal".
Secondly, I do not think that evolution is purely random; this argument would leave out evolutionary processes such as co-operative behavior and parent–offspring communication. In fact, the fittest species are the ones that communicate most efficiently (hi, internet!). Hence I think that, though technical debt might be temporarily higher, communication will usually lead to quick changes. Nowadays we have the possibility to change technology layers or tiers by means of refactoring and abstraction. In the EJB example of the original post, I don't see why an upfront design decision to use EJB 2.1 with a change request to EJB 3 would be different from the team realizing that EJB 3 is the way to go and incrementally changing the implementation (e.g. by assigning a team to refactoring). The more changes, the more YAGNI.

To some extent, these issues have been addressed by Alex. He adds Dual Inheritance theory (Alex knows the terminology much better, so please forgive me any formal mistakes) to the formula, adding a bit of pessimism towards the outcome of the mentioned communication. In real life, debt would most likely not be detected and eliminated; it would be covered up and not even sanctioned. This is a pretty good point, yet for some reason Alex limits it to myopic agile processes, which weakens it a bit.

Nevertheless, I still disagree.

Agile itself does not mean iterations are industrial or in any way myopic. The mentioned bias towards success is actually a bias towards "not failing". Innovation can only take place if graceful failure is embraced and considered success, whereas procrastination is considered the true failure. In agile development, not failing can mean multiple things. In most cases, a developer (being an engineer) will realize that a technology might prevent reaching the sprint or product goal and suggest adjustments accordingly. In my experience it's not the developers who are risk-averse but the management, who would not accept missing milestones even if that would mean a better end product being shipped.

Agile requires feedback: between the users, product management, operations (read: DevOps), project leads and the development team. This is also discussed by Alex in his following post. I very much agree with Alex that lack of feedback is one of the key reasons projects fail. I do, however, disagree that direct feedback stays myopic and does not change the ecosystem or culture around it (which somehow implies that no feedback is no problem). To stay with the metaphor of evolution, this would ignore changes during the development of an organism, during childhood, or via environmental changes.

My arguably optimistic assumption is that every developer wants to have success and deliver a good product. Given sufficient feedback, the developer will strive towards a "good architecture" – because they want good feedback, interesting projects, reference letters, svn praise and all that in a flat world. In the democratic process happening between the developers, and between the teams (it's good practice to have multiple teams, rather than "one development team/stream for each system"), questions will arise and possible problems will be addressed. If you have, however, one architect or project manager designing and making all the decisions, he or she might indeed act as Alex pointed out: not accepting change, eventually leading the project to grief, frustration and death. In the worst case, by turning this into a hindsight-bias "problem of the developers" and trying an even tougher policy next time.

In building architecture, an architect is someone who has an overall idea, mainly communicating, motivating and facilitating process change with the end goal in mind. No architect in the real world will ignore experts rather than listen to them every single day, unless the keys have already been handed over. The beauty of evolution is that it has found so many ways to overcome its own randomness. As adaptability is actual fitness, evolution has found ways to incorporate feedback probably better than we humans do in everyday life. A good architect is a feedback broker who fosters evolutionary processes, because he or she knows that one day the actual world (as in societal change, earthquakes or a subprime crisis) will hit hard.

I believe that software architects are still necessary. Instead of a "master architect" attitude, though, they should have a role very similar to that of building architects. Sticking to a master plan in one person's head with the argument that emergence will lead to worse results is the dark side of choice management; it's giving up adaptability in favor of predictability of failure. Accepting emergence is the flipside. Channeling ideas coming from emergence into architectural decisions, embracing change coming from democracy within the development team, and keeping all options open will eventually lead to even higher adaptability, more complex systems and – just by the way – a better life as evolved human beings.



Wednesday, 25 July 2012

On Mixed Model Mobile Apps

Charlie Kindel recently published "Apps must be cross-platform", an article I mostly agree with; I would only add cross-device and a grain of pervasiveness. My 2009 thesis supported the same argument: that cross-platform should be neither agnostic (like HTML5 or Java ME) nor purely generative; it should abstract business logic but embrace user experience. Therefore, I decided to finally publish some of the content (1, 2) in concise form (iteratively, as I am not allowed to publish the thesis itself), looking back.

In Architecture without Architects, Bernard Rudofsky claimed that architects should learn from premodern architectural forms. As IT architects, we should learn from our users and embrace user experience. Apply the "ilities" not in general, but where they belong: performance and usability at the presentation tier; reuse, scalability, maintainability and requirements traceability in the business tier.



Art made by walking. Source: Richard Long