Sunday, 31 December 2017

Mandala or The end of control

A good friend of mine asked me why I don’t blog anymore, so I took my New Year's flight as an opportunity to write down some random thoughts. Happy New Year!

We used to build Information Systems or Control Systems. Sometimes they were clumsily merged - but they have finally become something entirely new: Intelligent Intent Systems. I don’t like catchy slang, though, so let’s just say we finally have universal “Systems”.

A sufficiently lean online shop is essentially a simple interface for sending a signal into an extremely complex, often entirely automated logistics chain that translates information into physical control commands - the Information System part manages the feedback loop to humans. The more feedback - an idea from Control Systems - became incorporated into Information Systems, the smarter, faster, and more intuitively we were able to interact. Like frames in a movie, we’re now at the point where interaction becomes seamless and continuous. The IoT and spatial computing - really just technology becoming synonymous with information - close the loop back into our world (the "real" world). We used to interpret our systems as essentially closed and deterministic, in both imperative and functional programming styles. But the new types of systems have rapidly become Probabilistic Systems.


With the planet-scale cloud of distributed services, risk-driven models from complexity and uncertainty theory have taken center stage. With Machine Learning rapidly surpassing human experience in programming and research, we will ourselves have to model the real, human world in a descriptive, probabilistic way as part of those systems - observing and inferring, rather than imperatively defining the message flows between agents and their consistency properties, the data flows between processors, or structural limitations (think column- vs. row-oriented data).
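Observing and inferring instead of imperatively defining can be sketched with the simplest possible Bayesian loop. This is a minimal, hypothetical example - the service, its hidden success rate, and all numbers are invented - in which we never assert how the world behaves; we only update a belief about it:

```python
import random

random.seed(42)

# An unknown "real world" behaviour we can only observe, never define:
# a service that succeeds with some probability we don't control.
TRUE_SUCCESS_RATE = 0.9  # hidden from the observer

def observe():
    return random.random() < TRUE_SUCCESS_RATE

# A Beta(alpha, beta) belief over the success rate; every observation
# updates the belief instead of asserting a fixed invariant.
alpha, beta = 1.0, 1.0
for _ in range(1000):
    if observe():
        alpha += 1
    else:
        beta += 1

estimate = alpha / (alpha + beta)
print(f"inferred success rate ~ {estimate:.2f}")
```

The point of the sketch is the inversion: the system's "truth" lives outside the program, and the program converges on a description of it.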

We don’t observe from outside the system, but as a part of it - much like quantum physics, psychology, and sociology (especially the sociology of power) have told us. We humans are only agents receiving signals in this system. We inhabit a second-order Cybernetics Technosphere we call reality, built on platforms that define what they want and value economically in entirely new, sometimes alien ways - real-time exchanges with spatial and social dimensions, with the gig and experience economy only the tip of the iceberg. It’s not a matrix or a mastermind, though; it’s just a real techno-human ecosystem with its own, uncontrollable evolutionary goals.

Despite my dislike of the gig economy, I like the idea of evolutionary systems. Most writing focuses on the iterative process as a way to avoid unpredictability, though, assuming some external, given linearity into which feedback can be incorporated. That’s important for organizations, but it rarely mentions where the feedback comes from: the goal, the adversary, the predator, or the local maximum.


I mainly work on the integration between the real world and software, the hard end of mobile / ubiquitous / spatial / pervasive computing: building independent, distributed service meshes in a DevOps and Design Thinking (or DesignOps, whatever the cool kids call it these days) way. In such systems, you’re always chasing a local maximum; the goal is unclear, and more often than not there are multiple conflicting ones. You’re naturally dealing with (domain) verticals rather than horizontals. Every day brings a new trade-off between the best possible experience and the technical realities. Those systems are not linear evolutions; they are mandalas of expanding and contracting system boundaries. They have to be observable, though, because with observation comes empathy, and with empathy learning.
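The "local maximum" here is meant quite literally. A toy sketch - the quality landscape and every number are invented for illustration - of how a greedy, feedback-driven process settles on whichever peak is nearest rather than the best one:

```python
import math

# A toy one-dimensional quality landscape with two peaks:
# a small hump near x=1 and a taller one near x=4.
def quality(x):
    return 2.0 * math.exp(-(x - 1) ** 2) + 3.0 * math.exp(-(x - 4) ** 2)

# Greedy hill climbing: take whichever small step improves quality.
def hill_climb(x, step=0.01, iterations=1000):
    for _ in range(iterations):
        if quality(x + step) > quality(x):
            x += step
        elif quality(x - step) > quality(x):
            x -= step
    return x

# Where you end up depends entirely on where you start:
print(hill_climb(0.0))  # gets stuck near the small local peak at x ~ 1
print(hill_climb(3.0))  # finds the taller peak at x ~ 4
```

Which maximum the climb finds is decided by the starting point - the "technical realities" - not by the landscape's global best.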

In the future, we may refactor the parameters of our systems based on deeper insights into their non-deterministic behaviour. That's what I like about SRE-style work: it's the non-deterministic, probabilistic part of software engineering. It focuses on observability and serviceability, and ML-based root-cause analyses bring in explainability - correctness becomes an optimization goal, not an axiom. When I first spoke publicly about Spanner in 2012, I compared it to the twisted experience of time in movies like Spaceballs and The Hitchhiker's Guide to the Galaxy. The powerful takeaway is: nothing is fixed - if we can reason about it, we can change it.
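Correctness as an optimization goal rather than an axiom is exactly how SRE-style error budgets treat availability. A back-of-the-envelope sketch, where the SLO target and the measured bad minutes are assumed numbers:

```python
# Assumed: a 99.9% availability SLO over a rolling 30-day window.
SLO = 0.999
WINDOW_MINUTES = 30 * 24 * 60  # 43,200 minutes in the window

# The error budget is the amount of "incorrectness" we allow
# ourselves - unavailability is budgeted, not forbidden.
error_budget = (1 - SLO) * WINDOW_MINUTES  # allowed bad minutes

observed_bad_minutes = 12.0  # assumed measurement from monitoring

# Burn rate: how much of the budget has been spent so far.
burn = observed_bad_minutes / error_budget
print(f"error budget: {error_budget:.1f} min, burned: {burn:.0%}")
```

As long as the budget isn't exhausted, the system is "correct enough" to keep shipping - the axiom has become a dial.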

To understand the magnitude of change to our profession, we have to understand the societal context. Of all the possible futures, a dystopian scenario is interesting here - I'll shorten my version, having watched Charlie Stross’ talk from 34C3, which tells it better: in this scenario, the "we" is not a harmonic, transhuman unity. The new we is us and algorithms from us, for us. A dark (in the sense of dark matter) singularity not of eternal life but of thoughtless, mutual uncertainty, where biased algorithms and biased - dumbed down or even corrupt - people push each other further into the edges, becoming not market segments but mobs that reinforce themselves. The algorithm is as helpless as its users, because the society and economy around it require that entropy as fuel, making regulation impossible.

Quick, personalized adjustment - unlearning, or one-shot learning - can maybe avoid this scenario. It seems AI is already forgetting more easily than us, controlling and optimizing itself faster than us, collaborating and sharing surprising insights more gracefully than us. In a "thinking fast, thinking slow" model, maybe the human ought to become the "slow" part - as Nate Silver once put it, there are "complementary roles that computer processing speed and human ingenuity can play in prediction". Instead of turning away, the human needs to be enabled to act.


Right now, we train machine learning systems by saying “yes” - soon we will reach the point where transfer is so good that we’ll start saying “no”. That “no” has to be slow, it has to be thoughtful, and it has to weigh more than assumed silent consent. We have to introduce reason, empathy, and ethics not only into individual machine learning models, but into the whole system that is driven by technology - into all of the information and the human organizational complex around it.
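A "no" that weighs more than silent consent could look, in the most naive sketch, like an asymmetric weighting of feedback signals. The feedback log, the weights, and the aggregation below are all invented for illustration:

```python
# Hypothetical feedback log: explicit "yes" (accept), explicit "no"
# (thoughtful rejection), and "silent" (no reaction, assumed consent).
feedback = ["yes", "silent", "silent", "no", "silent", "yes"]

# Assumed weights: one deliberate "no" outweighs many silent consents.
WEIGHTS = {"yes": 1.0, "silent": 0.1, "no": 5.0}
SIGNS = {"yes": +1, "silent": +1, "no": -1}

# Weighted approval in [-1, +1]: negative means the single explicit
# "no" overrules the accumulated silence.
score = sum(WEIGHTS[f] * SIGNS[f] for f in feedback)
total_weight = sum(WEIGHTS[f] for f in feedback)
print(f"weighted approval: {score / total_weight:+.2f}")
```

With these made-up weights, one deliberate rejection flips the aggregate negative despite a majority of implicit approvals - which is the point: silence is data, but it shouldn't be decisive.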

What excites me about working with systems from an observability, serviceability, and explainability perspective is that we can bring in all of that rich knowledge from physics, psychology, sociology, hermeneutics, and even art, and start reasoning about overall behaviour rather than deterministic, imperative requirements. We only have to keep talking to each other - and try to understand.