Friday, June 01, 2018

Thinking about Software Architecture & Design : Part 11


It is said that there are really only seven basic storylines and that all stories can either fit inside them or be decomposed into some combination of the basic seven. There is the rags-to-riches story. The voyage and return story. The overcoming the monster story...and so on.

I suspect that something similar applies to Software Architecture & Design. When I was a much younger practitioner, I remember the field being very active, with new methodologies/paradigms coming along on a regular basis. Thinkers such as Yourdon, de Marco, Jackson, Booch, Hoare, Dijkstra and Hohpe distilled the essence of most of the core architecture patterns we know of today.

In more recent years, attention appears to have moved away from the discovery/creation of new architecture patterns and methodologies towards concerns closer to the construction aspects of software. There is an increasing emphasis on two-way flows in the creation of architectures, or perhaps circular flows would be a better description: iterating backwards from, say, user stories to the abstractions required to support them, then iterating forwards, refactoring the abstractions to cover the required user stories with fewer "moving parts", as discussed before.

There has also been a marked trend towards embracing the volatility of the IT landscape: proceeding to software build phases with "good enough" architectures and consciously factoring in the possibility of needing complete architecture re-writes in ever shorter time spans.

I suspect this is an area where real-world physical architecture and software architecture fundamentally differ and the analogy breaks down. In the physical world, once the location of the highway is laid down and construction begins, a cascade of difficult-to-reverse events starts to occur in parallel with the construction of the highway. Housing estates and commercial areas pop up close to the highway. Urban infrastructure plans, perhaps looking decades into the future, are created predicated on the route of the highway, and so on.

In software, an architecture change often has a similar set of knock-on effects but, when the affected items are themselves primarily software, rearranging everything around the new architecture is more manageable. Still likely a significant challenge, but more doable, because software is, well, "softer" than real-world concrete, bricks and mortar.

My overall sense of where software architecture is today is that it revolves around the question: "how can we make it easier to fundamentally change the architecture in the future?" The fierce competitive landscape for software has combined with cloud computing to fuel this burning question.

Creating software solutions with very short (i.e. weeks) time horizons before they change again is now possible and increasingly commonplace. The concept of version number is becoming obsolete. Today's software solution may or may not be the same as the one you interacted with yesterday and it may, in fact, be based on an utterly different architecture under the hood than it was yesterday. Modern communications infrastructure, OS/device app stores, auto-updating applications, thin clients...all combine to create a very fluid environment for modern day software architectures to work in.

Are there new software patterns still emerging since the days of data flow and ER diagrams and OOAD? Are we just re-combining the seven basic architectures in a new meta-architecture which is concerned with architecture change rather than architecture itself? Sometimes I think so.

I also find myself wondering where we go next if that is the case. I can see one possible end point for this, an end point which I find tantalizing and surprising in equal measure. My training in software architecture, both the formal parts and the decades of informal training since then, has been based on the idea that the fundamental job of the software architect is to create a digital model, a white box, of some part of the real world, such that the model meets a set of expectations in terms of its interaction with its users (which may be other digital models).

In modern day computing, this idea of the white box has an emerging alternative which I think of as the black box. If a machine could somehow be instructed to create the model that goes inside the box, based purely on an expression of its required interactions with the rest of the world, then you would basically have the only architecture you will ever need for creating what goes into these boxes. The architecture that makes all the other architectures unnecessary, if you like.

How could such a thing be constructed? A machine learning approach, based on lots and lots of input/output data? A quantum computing approach which tries an infinity of possible Turing machine configurations, all in parallel? Even if this is not possible today, could it be possible in the near future? Would the fact that boxes constructed this way would necessarily be black (beyond human comprehension at the control-flow level) be a problem? Would the fact that we can never formally prove the behavior of the box be a problem? Perhaps not as much as might initially be thought, given the known limitations of formal proof methods for traditionally constructed systems. After all, we cannot, in general, tell whether a process will halt, regardless of how much access we have to its internal logic. Also, society seems to be in the process of inuring itself to the unexplainability of machine learning: that genie is already out of the bottle. I have written elsewhere (in the "what is law?" series) that we have the same "black box" problem with human decision making anyway.

To get to such a world, we would need much better mechanisms for formal specification. Perhaps the next generation of software architects will be focused on patterns for expressing the desired behavior of the box, not models for how that behavior can be achieved. A very knotty problem indeed but, if it can be solved, radical re-arrangements of systems could in future start, and effectively stop, with an update to the black box specification, with no traditional analysis/design/construct/test/deploy cycle at all.
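As a rough sketch of what a behavior-only specification might look like (my own illustration, not a proposal from the text), here is a property-style check in Python, with a sorting box as the stand-in example:

```python
# A speculative sketch: specify the box purely by properties its
# input/output behavior must satisfy, saying nothing about what is inside.
# The sort_box callable stands in for whatever the machine synthesizes.

def satisfies_spec(sort_box, samples):
    """Check a candidate black box against a behavioral specification."""
    for xs in samples:
        ys = sort_box(xs)
        ordered = all(a <= b for a, b in zip(ys, ys[1:]))
        same_items = sorted(xs) == sorted(ys)
        if not (ordered and same_items):
            return False
    return True

# Any box that passes is acceptable, however it was constructed.
assert satisfies_spec(sorted, [[3, 1, 2], [], [5, 5, 1]])
```

The specification says nothing about control flow, only about observable behavior, which is exactly what would make the box black.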

Monday, May 28, 2018

Thinking about Software Architecture & Design : Part 10


Once the nouns and verbs I need in my architecture start to solidify, I look at organizing them across multiple dimensions. I tend to think of the noun/verb organization exercise in the physical terms of surface area and moving parts. By "surface area" I mean the sheer size of the model, which I try to minimize. I freely admit that page count is a crude-sounding measure for a software architecture, but I have found over the years that the total size of the document required to adequately explain the architecture is an excellent proxy for its total cost of ownership.

It is vital, for a good representation of a software architecture, that both the data side and the computation side are covered. I have seen many architectures where the data side is covered well but the computation side has many gaps. This is the infamous "and then magic happens" part of the software architecture world. It is most commonly seen when there is too much use of convenient real world analogies, e.g. thematic modules that just snap together like jigsaw/lego pieces, data layers that sit perfectly on top of each other like layers of a cake, objects that nest perfectly inside other objects like Russian Dolls, etc.

When I have a document that I feel adequately reflects both the noun and the verb side of the architecture, I employ a variety of techniques to minimize its overall size. On the noun side, I can create type hierarchies to explore how nouns can be considered special cases of other nouns. I can create relational de-compositions to explore how partial nouns can be shared by other nouns. I will typically "jump levels" when I am doing this, i.e. I will switch between thinking of the nouns in purely abstract terms ("what is a widget, really?") and thinking about them in physical terms ("how best to create/read/update/delete widgets?"). I think of it as working downwards towards implementation and upwards towards abstraction at the same time. It is head-hurting at times, but in my experience it produces better practical results than the simpler step-wise refinement approach of moving incrementally downwards from abstraction to concrete implementation.
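To make the level-jumping concrete, here is a minimal Python sketch (the Widget name and its fields are hypothetical) showing the same noun once in purely abstract terms and once behind a physical create/read/update/delete interface:

```python
# A minimal sketch of "jumping levels": the same noun expressed abstractly
# and physically. All names here are illustrative.

from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass
class Widget:
    """The abstract level: what a widget is, independent of storage."""
    widget_id: str
    name: str


class WidgetStore(Protocol):
    """The physical level: how widgets are created/read/updated/deleted."""
    def create(self, widget: Widget) -> None: ...
    def read(self, widget_id: str) -> Optional[Widget]: ...
    def update(self, widget: Widget) -> None: ...
    def delete(self, widget_id: str) -> None: ...
```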

On the verb side, I tend to focus on the classic engineering concept of "moving parts". Just as in the physical world, it has been my experience that the smaller the number of independent moving parts in an architecture, the better. Giving a lot of thought to opportunities to reduce the total number of verbs required pays handsome dividends. I think of it in terms of combinatorics: what are the fundamental operators I need, from which all the other operators can be created by combination? Getting to this set of fundamental operators is almost like finding the architecture inside the architecture.
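As a toy illustration of such a fundamental set (the account/transfer domain is my own example, not one from the text), two primitive verbs can be combined to yield a derived verb:

```python
# Two fundamental verbs, plus one verb derived purely by combining them.

def credit(balances, account, amount):
    """Fundamental verb: add funds to an account (returns a new mapping)."""
    new = dict(balances)
    new[account] = new.get(account, 0) + amount
    return new

def debit(balances, account, amount):
    """Fundamental verb: remove funds from an account."""
    return credit(balances, account, -amount)

def transfer(balances, src, dst, amount):
    """Derived verb: a combination of the two fundamental verbs."""
    return credit(debit(balances, src, amount), dst, amount)

assert transfer({"a": 100, "b": 0}, "a", "b", 25) == {"a": 75, "b": 25}
```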

I also think of verbs in terms of complexity generators. Here I am using the word "complexity" in the mathematical sense. Complexity is not a fundamentally bad thing! I would argue that all system behavior has a certain amount of complexity. The trick with complexity is to find ways to create the amount required, but in a way that allows you to stay in control of it. The compounding of verbs is the workhorse for complexity generation. I think of data as a resource that undergoes transformation over time. Most computation (even the simplest assignment of the value Y to be the value Y + 1) has an implicit time dimension. Assuming Y is a value that lives over a long period of time, i.e. is persisted in some storage system, then Y today is just the compounded result of the verbs applied to it since its date of creation.
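A minimal sketch of that idea, assuming an event-log style of persistence purely for illustration: the current value of Y is just a fold of every verb ever applied to it.

```python
# Today's Y is the compounded result of the verbs applied since creation.

from functools import reduce

def apply_verb(y, verb):
    kind, arg = verb
    if kind == "add":
        return y + arg
    if kind == "set":
        return arg
    raise ValueError(f"unknown verb: {kind}")

history = [("set", 0), ("add", 1), ("add", 1), ("set", 10), ("add", 5)]

y_today = reduce(apply_verb, history, 0)   # compound the verbs over time
assert y_today == 15
```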

There are two main things I watch for as I am looking into my verbs and how to compound them and apply them to my nouns. The first is to always include the ability to create an ad-hoc verb “by hand”. By which I mean, always having the ability to edit the data in nouns using purely interactive means. This is especially important in systems where down-time for the creation of new algorithmic verbs is not an option.

The second is watching out for feedback/recursion in verbs. Nothing generates complexity faster than feedback/recursion and, when it is used, it must be used with great care. I have a poster on my wall of a fractal with its simple mathematical formula written underneath it. It is incredible that such bottomless complexity can be derived from such a harmless looking feedback loop. Used wisely, it can produce architectures capable of highly complex behaviors but with small surface areas and few moving parts. Used unwisely.....
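As a tiny illustration of the point (my example, not the formula on the poster), the logistic map feeds its own output back in as its next input and, for some parameter values, produces effectively unpredictable behavior from one line of arithmetic:

```python
# A harmless looking feedback loop that generates enormous complexity.

def logistic_orbit(r, x0, n):
    """Repeatedly feed the output of x -> r * x * (1 - x) back in as input."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_orbit(3.9, 0.5, 10))   # chaotic for r values near 4
```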

Monday, May 21, 2018

Thinking about Software Architecture & Design : Part 9

I approach software architecture through the medium of human language. I do make liberal use of diagrams, but the diagrams serve as illustrations of what, to me, is always a linguistic conceptualization of a software architecture. In other words, my mental model is nouns and verbs and adjectives and adverbs. I look for nouns and verbs first. This is the dominant decomposition for me. What are the things that exist in the model? Then I look for what actions are performed on/by the things in the model. (Yes, the actions are also "things" after a fashion...)

This first level of decomposition is obviously very high level and yet, I find it very useful to pause at this level of detail and do a gap analysis. Basically what I do is I explain the model to myself in my head and look for missing nouns and verbs. Simple enough.

But then I ask myself how the data that lives in the digital nouns actually gets there in the first place. Most of the time when I do this, I find something missing in my architecture. There are only a finite number of ways data can get into a digital noun in the model: a user can enter it, an algorithm can compute it, or an integration point can supply it. If I cannot explain all the data in the model through the computation/input/integration decomposition, I am most likely missing something.
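A small sketch of that gap analysis, with hypothetical field names: tag every piece of data in a noun with its source (user entry, computation, or an integration point) and flag anything left unexplained.

```python
# Provenance check: every field must be explained by one of three sources.

SOURCES = {"user", "computed", "integration"}

invoice_fields = {
    "customer_name": "user",
    "line_items": "user",
    "vat_amount": "computed",
    "credit_rating": "integration",
    "approval_status": None,          # no explanation yet: a gap in the model
}

gaps = [field for field, src in invoice_fields.items() if src not in SOURCES]
print("Unexplained fields:", gaps)    # -> ['approval_status']
```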

Another useful question I ask at this level of detail relates to where the data in the nouns goes outside the model. In most models, data flows out at some point to be of use, i.e. it hits a screen or a printout or an outward-bound integration point. Again, most of the time when I do this analysis, I find something missing, or something in the model that does not need to be there at all.

Getting your nouns and verbs straight is a great first step towards what will ultimately take the form of objects/records and methods/functions/procedures. It is also a great first step if you are taking a RESTian approach to architecture, as the dividing line between noun-thinking and verb-thinking is, in my experience, the key difference between REST and RPC.
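A rough sketch of that dividing line, using hypothetical endpoints: noun-thinking keeps a small, fixed verb set and multiplies the nouns it applies to, while verb-thinking multiplies the named operations themselves.

```python
# Noun-thinking (RESTian): a small fixed verb set applied to many nouns.
rest_style = [
    ("GET",    "/widgets/42"),     # read the widget noun
    ("PUT",    "/widgets/42"),     # replace it
    ("DELETE", "/widgets/42"),     # remove it
]

# Verb-thinking (RPC): a growing vocabulary of operation names.
rpc_style = [
    "getWidget(42)",
    "renameWidget(42, 'sprocket')",
    "archiveWidget(42)",
]
```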

It is hard to avoid prematurely clustering nouns into types/classes, as our brains appear to be wired towards organizing things into hierarchies. I make the effort to avoid it because I find that as soon as I start thinking hierarchically, I close off the part of my brain that is open to alternative hierarchical decompositions. That matters because, in my experience, the factors that steer an architecture towards one hierarchy instead of another are practical ones, unrelated to "pure" data modelling: concerns related to organizational boundaries, integration points, cognitive biases, etc.

Take the time to explore as many noun/verb decompositions as you can because as soon as you pick one and start to refine the model, it becomes increasingly hard to think “outside the box” of your own architecture.

Friday, May 18, 2018

Thinking about Software Architecture & Design : Part 8


It is common practice to communicate software architectures using diagrams, but most diagrams, in my experience, are at best rough analogies of the architecture rather than faithful representations of it.

All analogies break down at some point. That is why we call them “analogies”. It is a good idea to understand where your analogies break down and find ways to compensate.

In my own architecture work, the main breakdown point for diagrams is that architectures in my head are more like movies than static pictures. In my mind's eye, I tend to see data flowing. I tend to see behaviors, both human and algorithmic, as animated actors buzzing around a 3D space, doing things, producing and consuming new data. I see data flowing in, data flowing out, data staying put but changing shape over time. I see feedback loops where data flows out but then comes back in again. I see the impact of time in a number of different dimensions. I see how it relates to the execution paths of the system. I see how it impacts the evolution of the system as requirements change. I see how it impacts the dependencies of the system that are outside of my control, e.g. operating systems.

Any static two-dimensional picture, or set of pictures, that I take of this architecture necessarily leaves a lot of information behind. I liken it to taking a photo of a large city at 40,000 feet and then trying to explain all that is going on in that city through that static photograph. I can take photos from different angles and that will help but, at the end of the day, what I would really like is a movable camera and the ability to walk/fly around the "city" as a way of communicating what is going on in it, and how it is architected to function. Some day...

A useful rule of thumb is that most boxes, arrows, straight lines and layered constructions in software architecture diagrams are just rough analogies. Boxes separating say, organizations in a diagram, or software modules or business processes are rarely so clean in reality. A one way arrow from X to Y is probably in reality a two way data flow and it probably has a non-zero failure rate. A straight line separating, say “valid” from “invalid” data records probably has a sizable grey area in the middle for data records that fuzzily sit in between validity and invalidity. And so on.

None of this is in any way meant to suggest that we stop using diagrams to communicate and think about architectures. Rather, my goal here is just to suggest that until we have better tools for communicating what architectures really are, we all bear in mind the limited ability of static 2D diagrams to accurately reflect them.

Thursday, May 10, 2018

Thinking about Software Architecture & Design : Part 7


The temptation to focus a lot of energy on the one killer diagram that captures the essence of your architecture is strong. How many hours have I spent in Visio/Powerpoint/draw.io on “the diagram”? More than I would like to admit to.

Typically, I see architectures that have the “main diagram” and then a series of detail diagrams hidden away, for use by implementation and design teams. The “main diagram” is the one likely to go into the stakeholder presentation deck.

This can work fine when there are not many stakeholders and organizational boundaries are not too hard to traverse. But as the number of stakeholders grows, the power of the single architectural view diminishes. Sometimes, in order to be applicable to all stakeholders, the diagram becomes so generic that it really says very little, i.e. the classic three-tiered architecture or the classic hub-and-spoke or the peer-to-peer network. Such diagrams run the risk of not being memorable to any of the stakeholders, making it difficult for them to get invested in it.

Other times, the diagram focuses on one particular “view” perhaps by putting one particular stakeholder role in the center of the diagram, with the roles of the other stakeholders surrounding the one in the middle.

This approach can be problematic in my experience. Even if you take great pains to point out that there is no implied hierarchy of importance in the arrangement of the diagram, the role(s) in the middle of the diagram will be seen as more important. It is a sub-conscious assessment. We cannot help it. The only exception I know of is when flow-order is explicit in the diagram but even then whatever is in the middle of the diagram draws our attention.

In most architectures there are "asks" of the stakeholders. The best way to achieve these "asks", in my experience, is to ensure that each stakeholder gets their own architecture picture, one that has their role in the center of the diagram, with all other roles surrounding their part in the big picture.

So, for N stakeholders there are N "main views" - not just one. All compatible ways of looking at the same thing. All designed to make it easier for each stakeholder to answer the “what does this mean for me?” question which is always there – even if it is not explicitly stated.

Yes, it is a pain to manage N diagrams but you probably have them anyway – in the appendices most likely, for the attention of the design and implementation phase. My suggestion is to take them out of the appendices and put them into the stakeholder slide deck.

I typically present two diagrams to each stakeholder group. Slide one is the diagram that applies to all stakeholders. Slide two is for the stakeholder group I am presenting to. As I move around the different stakeholder meetings, I swap out slide number two.


Tuesday, May 08, 2018

Thinking about Software Architecture & Design : Part 6


Abstractions are a two-edged sword in software architecture. Their power must be weighed against their propensity to ask too much from stakeholders, who may not have the time or inclination to fully internalize them. Unfortunately, most abstractions require internalization before their power can be appreciated.

In mathematics, consider the power of Euler's equation and contrast it with the effort involved to understand what its simple looking component symbols represent. In music, consider the power of the Grand Staff to represent musical compositions and contrast that with the effort required to understand what its simple looking symbols represent.
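For concreteness, the equation usually meant here is Euler's identity, a single short line that binds together five fundamental constants:

```latex
e^{i\pi} + 1 = 0
```

Its brevity is exactly the point: the power is only available to someone who has already internalized what e, i and π represent.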

Both of these abstractions are very demanding and very powerful. It is not uncommon for the software architecture practitioner community to enjoy, and be comfortable with, constantly internalizing new abstractions. However, in my experience, a roomful of software architects is not representative of a roomful of stakeholders in general.

Before you release your killer abstractions from the whiteboard into Powerpoint, try them out on a friendly audience of non-specialists first.

Wednesday, May 02, 2018

Thinking about Software Architecture & Design : Part 5

Most architectures will have users at some level or other. Most architectures will also have organizational boundaries that need to be crossed during information flows.

Each interaction with a user and each transition of an organizational boundary is an “ask”. i.e. the system is asking for the cooperation of some external entity. Users are typically being asked to cooperate by entering information into systems. Parties on the far end of integration points are typically being asked to cooperate by turning around information requests or initiating information transfers.

It is a worthwhile exercise, while creating an architecture, to tabulate all the "asks" and identify those that do not have associated benefits for the parties being asked.
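A minimal sketch of such a tabulation, with purely illustrative entries: list each ask alongside the benefit flowing back to the asked party, then flag the rows where nothing flows back.

```python
# Tabulate the "asks" and flag the ones with no benefit for the asked party.

asks = [
    {"party": "field engineer", "ask": "enter job notes",   "benefit": "auto-generated timesheet"},
    {"party": "partner system", "ask": "push stock levels", "benefit": None},
    {"party": "customer",       "ask": "confirm address",   "benefit": "faster delivery"},
]

unmotivated = [a["party"] for a in asks if not a["benefit"]]
print("Asks with nothing in it for the asked party:", unmotivated)
```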

Any entities interacting with the system that are giving more than they are receiving, are likely to be the most problematic to motivate to use the new system. In my experience, looking for ways to address this at architecture time can be very effective and sometimes very easy. The further down the road you get towards implementation, the harder it is to address motivational imbalances.

If you don't have an answer to the "what is in it for me?" question for each user interaction and each integration point interaction, your architecture will face avoidable headwinds both in implementation and in operation.