Monday, 21 October 2013

The application server is dead

Jonathan Harclerode and I gave the W-Jax 2012 keynote and have given it a few times since. Now, one year later, I'll write down the transcript in its most recent version (for OpenSlava).


The application server is dead. Applications running on it feel like legacy, they are legacy, we are coding legacy. We all know it.

So the question is not whether it's dead. At best, the application server is a zombie. And if we know it's dead, and that's beyond question, we have to ask: who killed it?

Did this guy kill it? No. He could have, but he probably thought it was too boring, so he simply made it disappear. He just degraded it to an app server.

So maybe these guys killed it? But no, they don't care about app servers. They decided that computers should be everywhere. They cut the word "server" from app server.

Maybe these guys? No, they are too cute to kill anything. They believe in an open world, so they removed its security, made its data available and taught us that sharing is power.

Then maybe this guy? This guy killed something: he killed transactions, he killed entities. He turned the way we handle systems upside down, stream-based and asynchronous. But he didn't kill the application server.

Well, that only leaves this guy. Dubbed the evil genius, he probably could have killed the application server. He just put them in cages, like chickens.

But no, he did not kill the application server either.

So, who killed the application server?

This guy did. Well, I don't mean Arnold Schwarzenegger in person; I mean the Terminator, Skynet. Our vision of the future as defined by Hollywood killed the application server. In the next half hour I want to walk you through this vision of the future created by Hollywood, show how it defines the state of computing, and show why the application server plays no role in it.

Maybe we should first say what we mean when we speak of an application server. We don't mean a Java EE standard or a product like WebSphere. We mean a black box that makes you, the programmer, lazy. A black box that abstracts the world away from you, that promises to hide the real world from you, providing a safe, cushioned environment. When we speak of an application server it can be a full-blown WebLogic, a lightweight Spring container or any vendor-specific PaaS.

An application server provided many services to us. We want to split this talk into four separate parts, the four main features of application servers in our definition.
First, context. You can call it session management, dependency management, boundary, or simply the situation your app finds itself in.
Second, coherence. Call it transactions, state management or persistence, maybe even flow control. It's the big picture.
Third, trust. Ranging from security to connection safety, identity and access management, it's what your application trusts to be trustworthy.
Fourth, management of all the above. Simply dumping all these resources somewhere does not make sense unless they are efficiently managed and monitored, and allow for collaboration and scalability.

Chapter 1: Context

Let's begin our first chapter with context. In the old world everything was easy. You could work on something, go to lunch, come back and continue working. The mainframe took care of our context. Then client-server architectures came along, and with them connection-bound session management. Now computing was cumbersome: we came back from lunch, and as soon as we hit the "save" button our session had been reset, the data was lost and all we saw was a login screen. But the worst part is: users learned to accept this constant annoyance. That's how we, as programmers, became the geeks, fixing something that looked broken to everybody else.

In most Hollywood movies, it's the geeks who manage and alter the digital context [Example: Jurassic Park]. They control a multitude of systems and hit all the blinking buttons on various monitors in order to create magic. But is that 80s vision really how we want to see ourselves? Most of us would like 007 technology, jetpacks with attached Martini dispensers! We don't want to constantly manage and alter digital context ourselves; we want a smart machine that solves problems instead of creating them! So we call on Big Data and mobile devices for help. But this idea is old. The tablet computer with Wikipedia on it, then called the Memex, was conceived in 1945. This movie [Example: Forbidden Planet] shows an early version of it - in 1956. That's the year my mother was born! No, slick tablets are not a long-term vision; they are only a means to the end of truly dynamic, pervasive computing. That end is what's interesting; call it contextual, affective or reactive (responsive) computing.

Back to the application server. How would an application server manage such a context? Let's look at Star Trek [Example: Star Trek TNG], clearly one of the most futuristic shows. On the NCC-1701 they can create black tea out of nothing, but none of the interfaces is dynamic. I mean, they need to run through the whole ship to find the one interface to battle the enemy? It makes for a great story, I admit; people running are always more interesting than a guy with a tablet in his hands, no? If the interfaces are boring, what's the vision? Something we cannot see: the "computer" that knows everything. They can talk to it; it's fully aware of the context and understands the situation without misinterpretation. This all-knowing computer is a recurring theme in movies, from 2001's HAL to the Matrix or Portal's GLaDOS. That's something we need to build today. Maybe you already know it from massively multiplayer games like World of Warcraft.

These games are interesting because their context is not connection- or device-bound. You can start a game at home in the evening, sleep a little, continue in the morning, realize it's already 8am, run into the office - and log in again. The game maintains a separate, artificial context which is eternal and ubiquitous. It can materialize at any time and location you want it to; the server manages the context rather than just accessing it.
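A minimal sketch of that difference in Java (the class and method names here are illustrative, not taken from any real game server): if the context is keyed by a durable identity instead of a connection, it survives logouts and can materialize on any device.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A context store keyed by who the user is, not by which connection or
// device they happen to use. Connections come and go; the context stays.
public class ContextStore {

    private final Map<String, Map<String, Object>> contexts = new ConcurrentHashMap<>();

    // Any device presenting the same identity gets the same context back.
    public Map<String, Object> materialize(String userId) {
        return contexts.computeIfAbsent(userId, id -> new ConcurrentHashMap<>());
    }

    // Deliberately a no-op: unlike an HTTP session, the context is not
    // bound to, and therefore not destroyed with, any particular connection.
    public void onDisconnect(String connectionId) {
        // the context outlives the connection
    }
}
```

The no-op on disconnect is the whole point: once context is identity-bound rather than connection-bound, there is nothing for a session-managing container to tear down.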

An interesting applied example of this is Google's "Chrome Super Sync Sports". It uses two devices that access the same application, both only through a browser. One acts as a controller, the other as a display, just how you want it, wired together by HTML5 technologies.
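Underneath, such pairing boils down to a message relay: both devices join the same named channel, and whatever the controller sends is forwarded to the display. Here is a hedged sketch of that relay using the Java WebSocket API (JSR 356); the endpoint path and room scheme are invented for illustration, and this is certainly not Google's actual implementation.

```java
import java.io.IOException;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

// Controller and display both open ws://host/pair/{room}; the room code
// (shown on the display, typed on the phone) wires them together.
@ServerEndpoint("/pair/{room}")
public class PairingRelay {

    private static final Map<String, Queue<Session>> rooms = new ConcurrentHashMap<>();

    @OnOpen
    public void join(Session session, @PathParam("room") String room) {
        rooms.computeIfAbsent(room, r -> new ConcurrentLinkedQueue<>()).add(session);
    }

    @OnMessage
    public void relay(String message, Session sender,
                      @PathParam("room") String room) throws IOException {
        // Forward every message to the other participants in the same room.
        for (Session peer : rooms.getOrDefault(room, new ConcurrentLinkedQueue<>())) {
            if (peer.isOpen() && !peer.equals(sender)) {
                peer.getBasicRemote().sendText(message);
            }
        }
    }

    @OnClose
    public void leave(Session session, @PathParam("room") String room) {
        rooms.getOrDefault(room, new ConcurrentLinkedQueue<>()).remove(session);
    }
}
```

Note the irony: this still runs in a container, but the container contributes nothing except the socket; the context lives in the room, not in a session.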

What we see coming are not only "multi-screen applications" in a "Nexus of Forces" which can be accessed in different, responsive ways. Context and information architecture need to be tied together; we need to begin thinking in cross-channel activities. We see applications turning into products that work not only across technological channels but converge into the real world as services. The next step in the progressive enhancement of IT will then be service design.

Chapter 2: Coherence

When we speak of context we usually imply coherence, because we want a context to be consistent and reliable, with a defined state, preferably with a perceivable transaction which led to this state. We prefer coherence, even if it's wrong. Mainframes are really good at managing this, so it was no surprise that transaction management also became a strong part of application servers.

Unfortunately, these perceived transactions make our life hard. It's the situation when the CIO approaches you, asking why your applications can't have the same universal kind of coherent context she has on Facebook. It's the situation when you have to tell your CIO that Facebook lies to her. They just trick her into thinking this context is coherent. They call it eventual consistency, and the perception of context is an integral part of it.
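One common trick behind this perceived coherence is "read your own writes": the system remembers what you just did and overlays it on whatever a possibly stale replica returns, so your own context looks coherent while the backend is still converging. A minimal Java sketch of the idea, purely illustrative and certainly not Facebook's actual code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// One view per user session: the user's own recent writes mask stale
// replica reads, so *their* context looks coherent while the replicas
// are still converging in the background.
public class SessionConsistentView {

    private final Map<String, String> recentWrites = new HashMap<>();
    private final Function<String, String> replicaRead; // possibly stale backend

    public SessionConsistentView(Function<String, String> replicaRead) {
        this.replicaRead = replicaRead;
    }

    public void write(String key, String value) {
        recentWrites.put(key, value); // remember locally...
        // ...and hand the write to the eventually consistent backend here
    }

    public String read(String key) {
        // Prefer what this user just wrote over what a lagging replica says.
        return Optional.ofNullable(recentWrites.get(key))
                       .orElseGet(() -> replicaRead.apply(key));
    }
}
```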

Let's look at one of the best examples of this, Spaceballs [Example: Spaceballs]. In Spaceballs, the evil empire is travelling through the universe in search of the princess. Finding someone in the vast universe is tricky, so they turn to their magic tape recorder, Mr. Rental. It has the movie "Spaceballs" in its library, so they simply fast-forward to the scene where the princess is shown. However, while fast-forwarding they arrive at the exact scene they are currently in - "now now". Here we see the classic paradox of time travel: it's not clear whether in this very moment the past is altered or the future rewritten, whether the present is new or old. But to Lord Helmet, it's consistent. That's the only important thing - his spaceship's context is consistent. Something we still need to make possible in our applications. Stateless - but context-aware. Eventually consistent - but responsive. Unambiguous - but resilient to changes.

One movie that deals with this paradox in an interesting way is actually a book, "The Hitchhiker's Guide to the Galaxy" [Example: Hitchhiker's Guide]. Again we are on a spaceship and want to cross the limitations of time and place. They use the "Infinite Improbability Drive" to overcome these limitations. In contrast to crude ideas like warp drive, it is based on quantum physics, particularly the axiom that any particle has only a probability of being in one place. Which, in turn, means it can be anywhere, with a low probability. The Improbability Drive leverages this and sends you anywhere, until you hit the right spot. You might arrive turned into a sofa - but don't worry, after a few seconds you will, eventually, be a consistent human being. And that's what matters. Now - how do I pull off a real-world example of this?

This quote is from Wired magazine. It basically says that humans can't play a role in financial trading anymore. Transaction speeds have become so fast that, by the time a security's quote has been read and processed by your brain with its 700 ms reaction time, it's worthless. Hundreds of other transactions could have changed the price dramatically. What's the worth of consistency if humans can't perceive it anyway?

We need to accommodate this hyper-consistency. Our applications need to learn to manage consistency in a way that humans can perceive. Not as a bad excuse for some lame NoSQL database, but as a rule of life. Inconsistency has to become an integral part of our application architecture. That's what David Gelernter meant when he argued for "designed information flows".

An interesting technology example is Google's Spanner database, part of its F1 system, which is becoming the backbone of its new search infrastructure. You might have heard that Google is not using MapReduce as much as it used to, because it's a relatively slow batch process. F1 is a very efficient, world-wide distributed database which is fast enough to reach an uncomfortable barrier: the speed of light. Thus, F1 embraces the fact that inconsistency cannot be avoided. It returns a probability that the information is consistent - and lets you query using an expected probability of consistency. The most important take-away here is that in the end you, or the service Google built for you, can define your level of consistency. Translated, this means your business layer has to decide on a level of consistency - not your application server.
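What that decision could look like in code, as a hedged sketch: the interface and names below are invented for illustration (Spanner and F1 are internal to Google), but the shape is the point: each business use case states how stale an answer it can tolerate.

```java
import java.time.Duration;

// Hypothetical read API: the business layer, not the container,
// decides how much staleness a given use case can tolerate.
interface ConsistencyAwareStore {

    // Strong read: wait until the answer is known to be current.
    String readStrong(String key);

    // Bounded-staleness read: fast, but possibly up to maxStaleness old.
    String readStale(String key, Duration maxStaleness);
}

class PricingService {

    private final ConsistencyAwareStore store;

    PricingService(ConsistencyAwareStore store) {
        this.store = store;
    }

    String displayPrice(String product) {
        // A product page can tolerate a few seconds of staleness...
        return store.readStale("price:" + product, Duration.ofSeconds(5));
    }

    String checkoutPrice(String product) {
        // ...but the checkout must not.
        return store.readStrong("price:" + product);
    }
}
```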

Chapter 3: Trust

So we live in this brave new world of distributed real-time systems where context is universal and computers decide what we should perceive. Isn't that a little bit scary?

Application servers are supposed to provide safety within the borders they assign to our environment. Let's look at how another movie solves the paradox of safety vs. privacy.

[Example: Minority Report]. Minority Report is used again and again as an example of excellent future prediction. Released more than ten years ago, it contains concepts that are already reality. For those who don't know it, it's about predictive analysis used to prevent crimes before they happen. This kind of large-scale analysis is currently being implemented by the NSA - but let's not get into that. First and foremost, Minority Report is a great example of 3D user interfaces, fancy multi-touch and adaptive computing. But that's already reality: Elon Musk's SpaceX uses a cheap Leap Motion sensor to achieve a similar effect. Minority Report is also often quoted as an example of personalized advertising, a big-data world which constantly tracks our desires. Well, New Songdo has something similar, the Quantified Self movement goes even beyond that, and expect to live in a future of "pay per gaze" with Google Glass soon.

But one thing is often overlooked in Minority Report, maybe because it's such an important part of the plot: constant identification. Everyone is permanently authenticated by eye scanning. The moment the protagonist walks into a fashion store after having his eyes replaced, he is greeted with the question whether his last purchase was satisfactory. Interestingly, this does not work today. Yes, there are federated provisioning protocols - but there is no one to trust. You still can't log in to Google with your Facebook account or vice versa. NFC is still waiting for its breakthrough due to competing systems, and banks are challenged by payment systems like Square and PayPal.

There is a new field emerging around trust, and it's coming from advertising. Twitter uses its reputation and its social network's self-correction to become the preferred identity provider with MoPub, its latest acquisition. Apple arguably goes in a similar direction with Touch ID and iBeacons. Note that ID here means identification, so don't mix it up with passwords.

Trust is always a personal thing, a decision based on heuristics. Our applications will need to become aware of whom we trust, and trust will need to be given by the user, who has to learn to manage it. Mozilla's Persona will be one option out of many; we have to offer those options instead of delegating trust to application server infrastructure.
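To make that concrete: Persona's server side is pleasingly small. The browser obtains a signed assertion via navigator.id.request() and hands it to your backend, which posts it to a verifier. A rough Java sketch of that verification call, with error handling omitted and the exact parameters hedged as an approximation of Persona's documented verifier API:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Scanner;

// Verifies a BrowserID/Persona assertion that the browser obtained via
// navigator.id.request(). The audience must match your site's origin.
public class PersonaVerifier {

    public static String verify(String assertion, String audience) throws Exception {
        URL url = new URL("https://verifier.login.persona.org/verify");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        String body = "assertion=" + URLEncoder.encode(assertion, "UTF-8")
                    + "&audience=" + URLEncoder.encode(audience, "UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }

        // The response is JSON, e.g. {"status":"okay","email":"user@example.org",...}
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            return s.useDelimiter("\\A").next();
        }
    }
}
```

No application server in sight; whom to trust is a decision made by your application and your user.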

Chapter 4: Management

In the end it's always the same question: do we trust central control, like Apple's, or do we rely on distributed systems, like Mozilla's? Who is watching the watchmen? Let's take a look at how we could reconcile context, coherence and trust into a working system.

Tron's [Example: Tron] option was to become one with the computer. This is called singularity. The problem with this approach, if we look at Hollywood movies, is that most computers are cleverer than we are. At some point, whether you look at WarGames [Example: WarGames], Terminator or the Matrix, the clever computers come to a very simple solution: eliminate mankind. I hope we agree that's not what we should design applications for.

But we don't need to look that far into the future. A centrally managed system can also exist in simpler forms, as in the Truman Show [Example: Truman Show], where the protagonist is made to think the world he is living in is perfect. Unfortunately it's managed by a huge mixed man-computer machinery; you could call it a benevolent dictator. His life is a TV show. In the end he breaks the borders of his world because the system fails to predict his next step - the Achilles heel of every centrally managed system.

The other option for management is distributed, emergent systems. You know, as in agile. Rather than having someone efficiently and centrally manage resources, trust and environment, we rely on natural social dynamics. The problem with systems like these is that they hardly ever reach an optimum. This is the dystopian world of Blade Runner [Example: Blade Runner]. Machines have become almost indistinguishable from humans. They hunt each other, no one has control, and the world is dirty and filthy. It's an almost cyberpunk world with a mixture of cyborg technology and crappy mechanics. It's the most likely outcome of most of the legacy systems we try to get rid of today. Management is impossible; there is local optimization, but cyborgs and humans alike fight the same anxieties.

The Japanese Ghost in the Shell [Example: Ghost in the Shell] goes even further. Humans and machines are equal, fighting other humans and other machines side by side. The biggest difference in these kinds of movies is that technology has become a commodity; it just works. Even sarcastic parodies like The Fifth Element [Example: 5th Element] show technology that has become like plumbing, an infrastructure - flying cars don't crash with a blue screen. It has become self-healing, self-optimizing, even self-creating - but also self-destructing and self-critical.

Bringing evolution into our systems and making them stronger through failure is called antifragility. Software architectures showing these characteristics are being created right now. Take, for instance, ROCA or CQRS, which are resilient to changes and easy to maintain and understand - but maybe harder to manage and monitor. Or take Behaviour-Driven Development and Testing, which puts the process at the center. In infrastructure, Cell Architecture and Software-Defined Networking allow new, emergent structures.
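To make one of these concrete, here is a toy CQRS sketch (invented for illustration): the command side only appends immutable events, and the query side is a projection that can be thrown away and rebuilt from the log at any time, which is exactly what lets such a system grow stronger through failure.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy CQRS: commands append immutable events; queries read a projection
// derived from the event log. Losing the projection is a non-event,
// because it can always be rebuilt by replaying the log.
public class ToyCqrs {

    static class Deposited {
        final String account;
        final long amount;
        Deposited(String account, long amount) {
            this.account = account;
            this.amount = amount;
        }
    }

    private final List<Deposited> eventLog = new ArrayList<>(); // write model
    private final Map<String, Long> balances = new HashMap<>(); // read model

    // Command side: record the fact, then update the projection.
    public void deposit(String account, long amount) {
        Deposited event = new Deposited(account, amount);
        eventLog.add(event);
        apply(event);
    }

    // Query side: cheap reads against the projection.
    public long balanceOf(String account) {
        return balances.getOrDefault(account, 0L);
    }

    // Recovery from failure is just a replay of the log.
    public void rebuildProjection() {
        balances.clear();
        eventLog.forEach(this::apply);
    }

    private void apply(Deposited e) {
        balances.merge(e.account, e.amount, Long::sum);
    }
}
```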

All these developments have one thing in common: they embrace the flow, the change, the failure. In turn, this means they don't need an application server to manage a cushioned environment for them.

Chapter 5: The End

Does anybody here have an Android phone? Does anyone know where the name Nexus comes from?

Say hi to Nexus [Showing slide of Blade Runner]. What you see here is the Nexus 7, the most human-like technology in Blade Runner, effectively indistinguishable. Google is already pushing our world towards singularity; that's why they call their devices Nexus. Our task will be to avoid the centralism that comes with it. We have to stay visionary, go down different routes, explore. We have to learn not to focus solely on technology but to see the whole. In the past, IT engineers brought knowledge from a wide array of areas; nowadays it often seems as if framework knowledge were more important than conceptual knowledge. But a bunch of rock-star and ninja coders can't help you make the right decisions.

In the end, the application server won't die. Its context capabilities will become ubiquitous, distributed, out of its hands. Its coherence capabilities will move into business logic, becoming a conceptual model. Its trust and security capabilities will be replaced by a network of trust built around identity and active defense. So what does this leave? The management, the arrangement of resources. The decisions we have to take. In the end the application server will not be killed - it will simply disappear. And one day we might ask each other if it's still there.

We must not become lazy; we must keep creating new systems and taking new paths rather than resting on framework cushions. That's going to be the task of our generation of technologists.



2 comments:

Jan said...

Zizek on Ideology in Movies http://www.youtube.com/watch?v=5VAKKnBOaKE

Jan said...

Apparently, at about the same time we held this talk for the first time, the "Make It So" blog was started: http://scifiinterfaces.wordpress.com/