I don’t get it.
I just read Lucas Carlson’s excellent overview of microservices architecture in InfoWorld. If you want an introduction to the subject you could do far worse, although I confess, it appears microservices architecture violates what I consider to be one of the fundamentals of good architecture. (It’s contest time: The first winning guess, defined as the one I agree with the most, will receive a hearty public virtual handshake from yours truly.)
My concern isn’t about the architectural value of microservices vs its predecessors. It’s that by focusing so much attention on it, IT ignores what it spends most of its time and effort doing.
Microservices, and DevOps, to which it’s tied at the hip, and almost all variants of Agile, to which DevOps is tied at the ankle, and Waterfall, whose deficiencies led to Agile’s popularity, all focus on application development.
WAKE UP!!!!! IT only develops applications when it has no choice. Internal IT mostly buys when it can and only builds when it has to. Knowing how to design, engineer and build microservices won’t help you implement SAP, Salesforce, or Workday, to pick three examples out of a hat. Kanban and Scrum might be a bit more helpful, but not all that much. The reasons range from obvious to abstruse.
On the obvious end of the continuum, when you build your own solutions you have complete control of the application and information architecture. When you buy solutions you have no control over either.
Sure, you can require a microservices foundation in your RFPs. Good luck with that: The best you can successfully insist on is full access to functionality via a RESTful (or SOAPy, or JSON-and-the-Argonauts) API.
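In case it helps, here’s a minimal sketch of what that API access looks like from the consuming side. Everything in it (the endpoint, the auth scheme, the pagination convention) is hypothetical; substitute whatever your vendor actually ships.

```python
# Minimal sketch: pulling records through a vendor's REST API.
# The endpoint, auth scheme, and field names here are hypothetical;
# use whatever your vendor actually exposes.
import requests

BASE_URL = "https://vendor.example.com/api/v1"  # hypothetical
TOKEN = "your-api-token"                        # issued by the vendor

def fetch_employees():
    """Fetch employee records, following simple page-based pagination."""
    employees, page = [], 1
    while True:
        resp = requests.get(
            f"{BASE_URL}/employees",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"page": page},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            return employees
        employees.extend(batch)
        page += 1
```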
Halfway between obvious and abstruse lies the difference in cadence between programming and configuration, and its practical consequences.
Peel away a few layers of any Agile onion and you’ll find a hidden assumption about the ratio between the time and effort needed to specify functionality … to write an average-complexity user story … and the time and effort needed to program and test it. The hidden assumption is that programming takes a lot longer than specification. It’s a valid assumption when you’re writing Angular, or PHP, or Python, or C# code.
It’s less valid when you’re using a COTS package’s built-in configuration tools, which are designed to let you tweak what the package does with maximum efficiency and minimum risk that the tweak will blow up production. The specify-to-build ratio is much closer to 1:1 than when a team is developing software from scratch, which means Scrum, with its user-story writing and splitting, backlog management, and sprint planning, imposes more overhead than needed.
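To put rough numbers on the point (every hour figure below is invented purely for the arithmetic):

```python
# Back-of-the-envelope arithmetic; all hour figures are invented for illustration.
def ceremony_share(spec_hrs, build_hrs, ceremony_hrs):
    """Scrum ceremony overhead as a share of total effort."""
    return ceremony_hrs / (spec_hrs + build_hrs + ceremony_hrs)

# Greenfield development: 2 hours to write the story, 20 to build and test it.
print(f"{ceremony_share(2, 20, 3):.0%}")  # ceremony is ~12% of the effort

# COTS configuration: same 2 hours to write the story, 3 to configure and test.
print(f"{ceremony_share(2, 3, 3):.0%}")   # the same ceremony is now ~38% of the effort
```

Same ceremonies, same stories; the overhead roughly triples as a share of the work.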
And that ignores the question of whether each affected business area would find itself more effective by adopting the process that’s built into the COTS package instead of spending any time and effort adapting the COTS package to the processes it uses at the moment.
At the full-abstruse end of the continuum lies the challenge of systems integration, waiting in the weeds to nail your unwary implementation teams.
To understand the problem, go back to Edgar Codd, his “twelve” rules for relational databases (there are thirteen of them; his numbering starts at zero), and his framework for data normalization. That framework is still the touchstone for IT frameworks and methodologies of all kinds, and just about all of them come up short in comparison.
Compare the process we go through to design a relational database with the process we go through to integrate and synchronize the data fields that overlap among the multiple COTS and SaaS packages your average enterprise needs to get everything done that needs to get done.
As a veteran of the software wars explained to me a long time ago, software is just an opinion. Which means that if you have three different packages that manage employee data, you bought three conflicting opinions of what’s important to know about employees and how to represent it.
Which in turn means synchronizing employee data among these packages isn’t as simple as “create a metadata map” sounds when you write the phrase on a PowerPoint slide.
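A sketch of why not, with three invented schemas standing in for those three opinions:

```python
# Three packages, three "opinions" of an employee. Every field name below is
# invented, but the conflicts are the kind you'll actually find.
hr_system = {"emp_id": "E1001", "name": "Garcia, Maria", "status": "A"}  # A = active
payroll   = {"employee_no": 1001, "first": "Maria", "last": "Garcia",
             "pay_status": "CURRENT"}
crm       = {"rep_id": "maria.garcia", "display_name": "Maria Garcia",
             "active": True}

# A metadata map has to do more than rename fields. It has to decide:
#   - which identifier wins ("E1001" vs 1001 vs "maria.garcia"),
#   - how names round-trip ("Garcia, Maria" vs first/last vs a display name),
#   - which status vocabularies mean the same thing ("A" vs "CURRENT" vs True,
#     and what about "on leave," which maybe only one package can represent?).

def naive_sync(source, field_map):
    """Renaming fields silently drops everything the map can't express."""
    return {field_map[k]: v for k, v in source.items() if k in field_map}

hr_to_payroll = {"emp_id": "employee_no", "status": "pay_status"}
print(naive_sync(hr_system, hr_to_payroll))
# {'employee_no': 'E1001', 'pay_status': 'A'} -- the wrong ID type, and a
# status code the payroll package won't recognize. That's the gap between
# the PowerPoint slide and the implementation.
```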
To the best of my knowledge, nobody has yet created an integration architecture design methodology.
Which shouldn’t be all that surprising: Creating one would mean creating a way to reconcile differing opinions.
And that’s a shame, because a methodology for resolving disagreements would have a lot more uses than just systems integration.
Computers as organizational infrastructure vs computers as tools, when modern computing necessarily has to be both.
Back in the day, computers mainly served to automate and scale certain organizational accounting and payroll functions, generally replicating successful manual procedures already in place.
However, with increases in local computing power and user development expertise come computers as tools. In this role, computers take on a value-added function for individual users and workgroups within the organization, supporting local, context-dependent analysis using seldom-repeated procedures.
I would think that ideally, the infrastructure computer would have become the repository of product, sales, and customer information in addition to accounting functions. But none of the infrastructure accounting packages built 15 years ago that I know of were really designed to be that kind of repository.
Where this is true, I don’t think it’s correct for IT to say of an infrastructure computer that only does accounting and payroll, “if it ain’t broke, don’t fix it.” If it’s not a data repository, as described above, then it is “broke,” and IT has to make the change, expensive as it will probably be.
Otherwise, it becomes the vexing “people” problem you wrote of, with no obvious (to me) happy ending.
The single biggest objection I have to in-house/custom systems is the long-tail back end: how they get supported in the future. In my experience, at some point these become black boxes: nobody knows what they do or how, just that if anything happens, things will stop working like they need to (and life as we know it will end).
Usually the best technical explanation is “when Joe worked here ten years ago, he did something with this stuff…”
Sorry. You lost me on this one. I don’t get the point you were trying to make. Looks like a shotgun blast to me.