Yes, it’s another re-run. I have a great excuse for this, but I’m not going to tell you what it is.

Anyway, I think you’ll like this one, and it’s old enough that there’s a good chance you weren’t a subscriber when it first ran (June 24, 2013).

Please enjoy the article while I enjoy my excuse.

Bob      

We need more engineers.

Not just in IT, bridge design, and electronics. We need them everywhere decisions are being made.

I’m not limiting engineers to those who know the niceties of load and stress computations, CPU design, gear ratios and such. No, to me, an engineer is anyone who understands you can’t cool down the kitchen by leaving the refrigerator door open.

Many business executives aren’t engineers, but they should be. Here’s what I mean:

Cooling off the kitchen is a metaphor for across-the-board cost-cutting.

The hot air blowing out of the back of the refrigerator parallels the impact of all-too-common across-the-board cost-cutting, which impairs the business more than it saves.

Any engineer knows all refrigerators do is pump heat out of an enclosed space and into a larger space. Leave the door open and the larger space and enclosed space become one and the same: The cooled air just mixes with the heat being pumped into the same space – even with perfect efficiency the net effect is zilch.

This describes most cost-cutting exercises pretty well, and especially cuts to the IT budget, because when costs are already too high, less automation probably won’t help. More might not help either, depending on what exactly is causing costs to be too high. But unless what’s being cut out of the IT budget are stupid ideas that shouldn’t have been approved in the first place, forcing employees to operate either manually or with obsolete technology just isn’t going to increase efficiency.

Which gets to the heart of why we need more engineers: Engineers generally think in terms of fixing problems rather than symptoms. So should business decision-makers.

So if a business is in trouble … if costs are too high … decision-makers need to first ask themselves some basic questions. Are costs really too high? Or is revenue too low? Or is it risk that’s too high because it isn’t being managed well, so that expensive problems that could have been prevented haven’t been?

Too many business executives act as if “our costs aren’t in line with our revenues” is a proper root cause analysis when profits are unsatisfactory. An engineer would insist on knowing how the business is supposed to work; then on identifying which of its moving parts aren’t moving as they should; and then on fixing the parts that are broken.

So if the problem is actually revenue, an engineer would determine whether the root cause is uncompetitive products, customer disservice, unappealing marketing and advertising, or a sales force that isn’t very good at selling. And fix it.

If the problem really is excessive cost, an engineer would figure out whether the root cause is cumbersome and inefficient processes, obsolete tools and technology that force employees into cumbersome and inefficient processes, poorly trained and unmotivated employees, or something else. And then the engineer would fix the actual problem.

Understand, this might sound simple. Conceptually, it is simple.

But it’s nothing like simple, because underneath it is the need for a clear understanding of how the business works … of the buttons and levers company management can push and pull that lead customers to buy products and services in acceptable volumes and margins.

This isn’t always complicated, but it can be complicated enough to tempt decision-makers to cut out a step or two. For example, one day in the distant past, the executives leading a not-entirely mythical automobile manufacturer discovered they could bump up the company’s profits by financing the purchase of their cars instead of leaving that business to the local banks.

Then they discovered they could sell more cars by discounting them, making up the slack by making more loans. And then they “discovered” the company was really in the financing business, and cars became just a means to an end.

Which led to its executives no longer caring whether their cars were desirable products. Instead, the company designed and built cars some people were willing to buy if they were cheap enough.

Which led to round after round of pointless cost-cutting, because cost wasn’t the company’s problem.

What was? It had forgotten how its business worked: with no cars to sell that people wanted to buy, it ground to a halt, even though its profits no longer came from car sales.

The company’s execs outsmarted themselves.

It’s why we need engineers.

So maybe you should add an interview question for managerial candidates: “If you open the refrigerator door, how much will it cool off your kitchen?”

Before you can be strategic you have to be competent.

That’s according to Keep the Joint Running: A Manifesto for 21st Century Information Technology, (me, 2012), the source of all IT management wisdom worth wisdoming.

An unglamorous but essential ingredient of IT organizational competence is software quality assurance (SQA), the nuts-and-bolts discipline that makes sure a given application does what it’s supposed to do and doesn’t do anything else.

SQA isn’t just one practice. It’s several. It checks:

Software engineering – whether code adheres to the overall system architecture, is properly structured, and conforms to coding style standards.

Unit testing – whether a module correctly turns each possible input into the expected output.

Integration testing – whether a module interacts properly with all the other modules the team is creating.

Regression testing – whether the new modules break anything that’s already in production.

Stress testing – whether the whole system will perform well enough once everyone starts to bang on it.

User acceptance – whether the new modules are aesthetically pleasing enough, and whether they do what the business needs them to do – that is, effectively support, drive, and manage the business processes they’re supposed to support, drive, and manage.

Ideally, IT’s SQA function will establish and maintain automated test suites for all production applications and keep them current, to ensure efficient and correct unit, integration, regression, and stress testing.
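To make that concrete, here’s what one tiny slice of such a suite might look like, using Python’s built-in unittest module. The sales_tax function is a hypothetical stand-in for any production module your SQA suite would cover:

```python
# A minimal sketch of an automated unit-test suite, using Python's
# built-in unittest module. The sales_tax function is hypothetical,
# a stand-in for any production module.
import unittest

def sales_tax(amount: float, rate: float) -> float:
    """Hypothetical production code: compute sales tax on a purchase."""
    if amount < 0 or rate < 0:
        raise ValueError("amount and rate must be non-negative")
    return round(amount * rate, 2)

class SalesTaxTests(unittest.TestCase):
    # Unit testing: each known input maps to the expected output.
    def test_typical_purchase(self):
        self.assertEqual(sales_tax(100.00, 0.07), 7.00)

    def test_zero_rate(self):
        self.assertEqual(sales_tax(100.00, 0.0), 0.0)

    # Rerunning the whole suite after every change is regression testing:
    # a new module that breaks existing behavior fails here, not in production.
    def test_rejects_negative_amount(self):
        with self.assertRaises(ValueError):
            sales_tax(-5.00, 0.07)

# Run with: python -m unittest <this file>
```

The hard part isn’t writing one of these; it’s writing thousands of them, for every module of every production application, and keeping them all current as the code changes.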

In practice, creating and managing automated test suites is really, really hard.

This looks like a fabulous opportunity for generative AI, doesn’t it? Instead of asking it to generate a mathematical proof in the style of William Shakespeare, point your generative AI tool of choice to your library of production application code and tell it to … generate? … an automated test suite.

Generative AI, that is, could take one of the most fundamental but time-consuming and expensive aspects of IT competence and turn it into a button-push.

Brilliant!

Except for this annoying tidbit that’s been an issue since the earliest days of “big data,” generative AI’s forgotten precursor: how to perform SQA on big data analytics, let alone on generative AI’s responses to the problems assigned to it.

Way, way, way back we had data warehouses. Data warehouses started with data cleansing, so business statisticians could rely on both the content and architecture of the data they analyzed.

But data warehouse efforts were bulky. They took too long, were anything but flexible, and frequently collapsed under their own weight, which is why big data, in the form of Hadoop and its hyperscale brethren, became popular. You just dumped your data into a data lake, deferring data cleansing and structuring … turning that data into something analyzable … until the time came to analyze it. It was schema-on-read, shifting responsibility from the IT-based data warehouse team to the company’s newly re-named statisticians, now “data scientists.”

The missing piece: SQA.

In scientific disciplines, researchers rely on the peer review process to spot bad statistics, along with all the other flaws they might have missed.

In a business environment, responsibility for detecting even such popular and easily anticipated management practices as solving for the number (deciding the answer first and working the analysis backward) has no obvious organizational home.

Which gets us to this week’s conundrum. We might call it SQA*2. Imagine you ask your friendly generative AI to automagically generate an automated test suite. It happily complies. The SQA*2 challenge? How do you test the generative AI’s automated test suite to make sure the flaws it uncovers are truly flaws, and that it doesn’t miss some flaws that are present – feed it into another generative AI?
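One partial answer to the first half of that question predates generative AI: mutation testing, in which you deliberately plant small bugs (“mutants”) in the code and check that the suite flags each one. Here’s a minimal hand-rolled sketch; the discount function and the two-check “suite” are hypothetical, not from any particular tool:

```python
# A hand-rolled sketch of mutation testing: deliberately inject a small
# bug (a "mutant") into the code and check that the test suite catches it.
# All names here are illustrative, not from any particular tool.

def discount(price: float, percent: float) -> float:
    """Correct implementation: apply a percentage discount."""
    return price * (1 - percent / 100)

def discount_mutant(price: float, percent: float) -> float:
    """Mutant: the sign is flipped, a bug a trustworthy suite must catch."""
    return price * (1 + percent / 100)

def suite_passes(fn) -> bool:
    """The test suite under evaluation (generated or hand-written),
    parameterized by the function it tests. True if every check passes."""
    checks = [
        (fn(100.0, 10.0), 90.0),   # 10% off 100 should be 90
        (fn(50.0, 0.0), 50.0),     # 0% off should change nothing
    ]
    return all(abs(actual - expected) < 1e-9 for actual, expected in checks)

# A trustworthy suite passes the real code and "kills" the mutant:
print(suite_passes(discount))              # True
print(not suite_passes(discount_mutant))   # True: the mutant was caught
```

Mutants that survive point to flaws the suite would miss, which is exactly what SQA*2 needs to detect.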

Bob’s last word: It’s easy, and gratifying, to point out all the gaps, defects, fallacies, and potential pitfalls embedded in generative-AI implementations. In the generative-AI vs human beings competition, we can rely on confirmation bias to assure ourselves that generative-AI’s numerous flaws will be thoroughly explored.

But even at the technology’s current level of development, we Homo sapiens need to consider the don’t-have-to-outrun-the-bear aspect of the situation:

Generative-AI doesn’t have to be perfect. It just has to be better at solving a problem than the best human beings are.

This week’s SQA*2 example … the automated generation of automated test suites … exemplifies the challenge we carbon-based technologies are going to increasingly face as we try to justify our existence given our silicon-based competition.

Bob’s sales pitch: You are required to read Isaac Asimov’s short story in which he predicts the rise of generative AI. Titled “Jokester,” it’s classic Asimov, and well worth your time and attention (and yes, I did say the same thing twice).

Now on CIO.com’s CIO Survival Guide: “5 IT management practices certain to kill IT productivity.” What’s it about? The headline is accurate.