I’m working on a (probably) three-part sequence on technical architecture, to be part of the IT Management 101 series I’m writing for CIO.com. As a famous person once said about health care, who knew architecture could be so complicated?

This isn’t a substitute for that sequence. It’s more along the lines of stray thoughts you might find helpful in assessing and managing technical architecture in your own organization.

Beware the seductive siren call of metaphor. The parallels between technical architecture and what professional building designers do are limited at best, and dangerous at worst.

The work of professional architects begins with a sketch and ends with blueprints. Technical architects don’t create blueprints, and if they did they would be embracing waterfall methodologies.

Agile methodologies don’t rely on blueprints of any kind. They often do rely on the equivalent of a sketch, but if so it’s the business analyst / internal consultant who draws it.

Crowdsourcing is a dicey way to gather data. Given how much information you’re going to want about each component in your portfolios, crowdsourcing it … sending out questionnaires to subject matter experts … is tempting.

Given that many enterprises can have a thousand or more components across all of their portfolios, crowdsourcing might not just be tempting – it might be unavoidable.

So if you do crowdsource your data-gathering, make sure you educate all of your information sources in the nuances of what you’re looking for.

And, assuming they do complete your questionnaires, curate the daylights out of the information they provide.

Version is data. Currency is information. You should include in your technical architecture database how current each component is, “current” meaning whether it matches what the vendor currently ships (fully current) or, descending through the possibilities, whether it has fallen out of support (obsolete).

Keeping track of which version of a component is deployed in production is relatively straightforward – just make sure that any time the responsible team installs an update they know to update the architecture database to match.

But what you care about is how current the component is, and you can only determine that if you know the product’s full version history, so you can match your production version to its position in that history.

Currency scores are, of course, perishable. They change each time a vendor issues a new release, so someone needs to track vendor releases for every commercial product in every portfolio in your architecture.
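To make that concrete, here’s a minimal sketch (Python, with an invented release-history schema rather than anything from an actual architecture database) of how a currency score might be derived by finding the deployed version’s position in the vendor’s release history:

```python
from dataclasses import dataclass

@dataclass
class VendorRelease:
    version: str
    supported: bool  # is this release still covered by vendor support?

def currency_score(deployed_version: str, release_history: list[VendorRelease]) -> str:
    """Rate how current a deployed component is, given the vendor's release
    history ordered oldest to newest. Labels are illustrative, not standard."""
    versions = [r.version for r in release_history]
    if deployed_version not in versions:
        return "unknown"  # can't score what you can't find in the history
    idx = versions.index(deployed_version)
    if not release_history[idx].supported:
        return "obsolete"  # fallen out of vendor support
    if idx == len(release_history) - 1:
        return "fully current"  # matches what the vendor currently ships
    if idx == len(release_history) - 2:
        return "one release behind"
    return "multiple releases behind"

# Example: the deployed version is still supported but two releases back.
# The score has to be recomputed whenever the vendor ships a new release --
# that's the perishable part.
history = [
    VendorRelease("11.0", supported=False),
    VendorRelease("12.0", supported=True),
    VendorRelease("12.5", supported=True),
    VendorRelease("13.0", supported=True),
]
print(currency_score("12.0", history))  # -> multiple releases behind
```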

It isn’t just your technology that has to stay current. You have to keep every piece of information you collect about each component in your architecture current, too.

You collect information about each component of your technical architecture. Some of it is constant. But quite a lot may change over time. For example, you’ll probably want to know how well each application supports the business functions it’s associated with. But business functions change, which means an application’s business function support score changes along with it.

So your information-gathering process has to operate on a cadence that balances the sheer effort required with the rate of decay of information accuracy.
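If it helps to picture the cadence problem, here’s one way (a hypothetical sketch, not anything resembling a formal framework) to flag which attributes of a component are overdue for re-verification, based on a per-attribute review cadence:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ComponentAttribute:
    name: str                # e.g. "business function support score"
    value: object
    last_reviewed: date
    review_every_days: int   # rough proxy for how quickly the information decays

def stale_attributes(attributes: list[ComponentAttribute], today: date) -> list[ComponentAttribute]:
    """Return the attributes whose last review is older than their cadence,
    i.e. whose information has probably decayed and needs re-verification."""
    return [a for a in attributes
            if today - a.last_reviewed > timedelta(days=a.review_every_days)]

# Example: the support score decays quickly; the vendor name barely decays at all.
attrs = [
    ComponentAttribute("vendor", "Initech", date(2020, 1, 15), review_every_days=720),
    ComponentAttribute("business function support score", 4, date(2021, 2, 1), review_every_days=180),
]
for a in stale_attributes(attrs, today=date(2021, 11, 1)):
    print("Re-verify:", a.name)  # -> Re-verify: business function support score
```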

Bob’s last word: Speaking of balancing effort and information, it’s tempting to collect a lot of data about each component in the architecture. Tempting, that is, until you pivot from collecting it the first time to updating it on a regular cadence, over and over again.

In the framework I use, I’ve identified about 30 attributes just for the application layer of the architecture. That’s a starting point. An important part of the process is whittling them down to the essentials.

Because 30 is too big a number. Ten will usually do the trick.

Bob’s sales pitch: I’m still whittling down the CIO.com architecture articles to their essentials. I’ll let you know when they’re available for your reading enjoyment.

In the beginning there was dBase II, designated “II” by Ashton-Tate, its publisher, to convey a level of maturity beyond its actual virtues. It was followed in quick succession by Paradox, Delphi, and Microsoft Access, all of which overcame most of dBase II’s (and III’s, and especially IV’s) numerous limitations.

These platforms increased developer productivity by approximately 10,000% compared to traditional COBOL coding – they let me get about a day’s worth of COBOL coding done in five minutes or so.

This history was current events more than twenty years ago, and yet IT shops still write code and enshrine the practice with various methodologies (Scrum, Kanban, DevOps, add-your-favorite-here) intended to improve IT’s overall app dev effectiveness.

Speaking of deja vu, the pundits who track such things write about no-code/low-code (NC/LC) development environments as if they’re something new and different – vuja de, like nothing they’ve seen before – when in fact they offer little their 1990s-vintage predecessors weren’t capable of way back when.

Should NC/LC be in your future? Gartner says yes, predicting that by 2024, “… low-code application development will be responsible for more than 65% of application development activity.”

They make it so easy … to nitpick, that is. Is the claim that 65% of all lines of code will be produced using No Code tools? Probably not, as No Code tools by definition produce no code.

Function points? Maybe, except that nobody uses function points any more.

Probably, Gartner means 65% of all developer hours will be spent using NC/LC tools.

Which is simply wrong, on the grounds that most IT shops license when they can and only build when they have to. In my unscientific experience, looking at total application functionality as the metric, maybe 75% comes from COTS implementations (commercial off-the-shelf software, which includes but isn’t limited to SaaS packages). Maybe 25% comes from in-house-developed custom applications, and that’s being generous.

As NC/LC platforms don’t touch COTS/SaaS functionality, it’s doubtful that work on 25% of the application portfolio can occupy 65% of all developer hours.

But I digress. The question isn’t whether Gartner has done it again. The question is how much attention IT should pay to this platform category.

Answer: If coding and unit testing are enough of a development bottleneck to care about, then yes. When it comes to optimizing any function, removing bottlenecks is generally a good idea.

Second answer: If in your company DIY application development is a source of a lot of application functionality, then selecting an NC/LC standard, integrating it with your application portfolio’s systems-of-record APIs, and providing training in its use will save everyone involved from a lot of headaches, while removing a source of friction and conflict between IT and the rest of the business.
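What that integration might look like in its simplest form: rather than letting citizen-developed apps query system-of-record tables directly, IT publishes a governed, read-only facade that the sanctioned NC/LC tool calls. The sketch below is Python with Flask, and the customer system of record, field names, and endpoint are all hypothetical:

```python
# A hypothetical read-only facade a sanctioned NC/LC tool could call, instead of
# letting citizen-developed apps reach into system-of-record tables directly.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the real system of record; in practice this would be a vetted
# query against the SoR's own API or database.
CUSTOMERS = {
    "C001": {"id": "C001", "name": "Initech", "status": "active",
             "credit_limit": 50000, "ssn": "never-exposed"},
}

# Governance lives here: only fields IT has approved are ever returned.
EXPOSED_FIELDS = {"id", "name", "status", "credit_limit"}

@app.route("/api/customers/<customer_id>", methods=["GET"])
def get_customer(customer_id: str):
    record = CUSTOMERS.get(customer_id)
    if record is None:
        abort(404)
    return jsonify({k: v for k, v in record.items() if k in EXPOSED_FIELDS})

if __name__ == "__main__":
    app.run(port=5000)
```

The design choice worth noticing is the whitelist: citizen developers get the data IT has agreed to expose, and nothing else.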

Third answer: Most COTS/SaaS applications have some sort of no-code or low-code toolkit built into them. These should be IT’s starting point for moving in the NC/LC direction, and for that matter for any new application functionality.

Bob’s last word: It’s easy to fall into the trap of answering the question someone asks. “Are NC/LC tools useful and ready for prime time?” is an example, and shows why Dr. Yeahbut makes frequent appearances in this space.

The answer to the question is, in fact, “Yeah, but.” NC/LC development should, I think, be part of the IT app dev toolkit. But mastering the tools needed to integrate, configure, enhance, and extend the company’s COTS application suites has, for most IT organizations, far more impact on overall IT app dev effectiveness than anything in the way of app dev tools.

Bob’s sales pitch: As a member of the KJR community you might enjoy my most recent contribution to CIO.com, and a podcast I was interviewed for.

The CIO.com article is titled “The hard truth about business-IT alignment.” You’ll find it here.

The interview was for Greg Mader’s Open and Resilient podcast and covered a number of KJR sorts of topics. You’ll find it here.