Before you can be strategic you have to be competent.

That’s according to Keep the Joint Running: A Manifesto for 21st Century Information Technology (me, 2012), the source of all IT management wisdom worth wisdoming.

An unglamorous but essential ingredient of IT organizational competence is software quality assurance (SQA), the nuts-and-bolts discipline that makes sure a given application does what it’s supposed to do and doesn’t do anything else.

SQA isn’t just one practice. It’s several. It checks:

Software engineering – whether code adheres to the overall system architecture, is properly structured, and conforms to coding style standards.

Unit testing – whether a module correctly turns each possible input into the expected output.

Integration testing – whether a module interacts properly with all the other modules the team is creating.

Regression testing – whether the new modules break anything that’s already in production.

Stress testing – whether the whole system will perform well enough once everyone starts to bang on it.

User acceptance – whether the new modules are aesthetically pleasing enough; also, whether they do what the business needs them to do – do they, that is, effectively support, drive, and manage the business processes they’re supposed to support, drive, and manage.

Ideally, IT’s SQA function will establish and maintain automated test suites for all production applications and keep them current, to ensure efficient and correct unit, integration, regression, and stress testing.
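To make “automated test suite” concrete, here’s a minimal sketch of the kind of check such a suite contains, written with Python’s pytest. The pricing module and its discount_price function are hypothetical stand-ins, not anyone’s real production code:

```python
# A minimal sketch of an automated test, using pytest.
# "pricing" and discount_price() are hypothetical stand-ins for
# one module of a real production application.
import pytest

from pricing import discount_price  # hypothetical module under test


def test_gold_customer_gets_ten_percent_off():
    # Unit testing: a known input must yield the expected output.
    assert discount_price(price=100.00, customer_tier="gold") == 90.00


def test_negative_price_is_rejected():
    # The "doesn't do anything else" half of SQA: bad input should
    # fail loudly, not return a plausible-looking number.
    with pytest.raises(ValueError):
        discount_price(price=-5.00, customer_tier="gold")
```

Run the suite on every build and regression testing comes along for the ride. Now multiply those two checks by every behavior of every module of every production application.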

In practice, creating and managing automated test suites is really, really hard.

This looks like a fabulous opportunity for generative AI, doesn’t it? Instead of asking it to generate a mathematical proof in the style of William Shakespeare, point your generative AI tool of choice to your library of production application code and tell it to … generate? … an automated test suite.

Generative AI, that is, could take one of the most fundamental but time-consuming and expensive aspects of IT competence and turn it into a button-push.

Brilliant!
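What might the button-push look like? Here’s a hedged sketch, assuming a hypothetical llm_generate() helper that stands in for whichever generative AI service you’d actually call (no vendor’s real API appears here):

```python
# A hedged sketch of the "button-push," not any vendor's actual API.
from pathlib import Path


def llm_generate(prompt: str) -> str:
    # Hypothetical helper: wire this to whatever generative AI
    # service you actually use.
    raise NotImplementedError("plug in your model of choice")


def generate_test_suite(source_dir: str) -> str:
    """Ask the model to write pytest tests for the code it's shown."""
    parts = [
        "Write a pytest test suite for the following code.",
        "Cover normal inputs, edge cases, and invalid inputs.",
    ]
    for path in sorted(Path(source_dir).rglob("*.py")):
        parts.append(f"# {path}\n{path.read_text()}")
    return llm_generate("\n\n".join(parts))


if __name__ == "__main__":
    Path("test_generated.py").write_text(generate_test_suite("src/"))
```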

Except for this annoying tidbit that’s been an issue since the earliest days of “big data,” generative AI’s forgotten precursor: How to perform SQA on big data analytics, let alone on generative AI’s responses to the problems assigned to it.

Way, way, way back we had data warehouses. Data warehouses started with data cleansing, so your business statisticians could rely on both the content and architecture of the data they analyzed.

But data warehouse efforts were bulky. They took too long, were anything but flexible, and frequently collapsed under their own weight, which is why big data, in the form of Hadoop and its hyperscale brethren, became popular. You just dumped your data into a data lake, deferring data cleansing and structuring … turning that data into something analyzable … until the time came to analyze it. It was schema on read, shifting responsibility from the IT-based data warehouse team to the company’s newly re-named statisticians, now “data scientists.”

The missing piece: SQA.

In scientific disciplines, researchers rely on the peer review process to spot bad statistics, along with all the other flaws they might have missed.

In a business environment, responsibility for detecting even such popular and easily anticipated management practices as solving for the number (deciding on the answer first, then working the analysis backward to reach it) has no obvious organizational home.

Which gets us to this week’s conundrum. We might call it SQA*2. Imagine you ask your friendly generative AI to automagically generate an automated test suite. It happily complies. The SQA*2 challenge? How do you test the generative AI’s automated test suite to make sure the flaws it uncovers are truly flaws, and that it doesn’t miss some flaws that are present – feed it into another generative AI?
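For what it’s worth, the testing trade does have one established, if partial, answer that doesn’t require recursing into a second AI: mutation testing. Plant a known flaw in the code; if the generated suite doesn’t notice, the suite is weak. Here’s a toy illustration of the idea, with pricing.py as a hypothetical module under test (real tools such as mutmut and PIT generate mutants by the hundreds):

```python
# Mutation testing in miniature: a test suite that can't tell real
# code from a deliberately broken copy can't be trusted to find flaws.
import subprocess


def suite_passes() -> bool:
    # Run the (AI-generated) test suite; exit code 0 means all green.
    return subprocess.run(["pytest", "-q"]).returncode == 0


def mutate(source: str) -> str:
    # One crude mutant: flip a comparison operator.
    return source.replace("<=", "<", 1)


MODULE = "pricing.py"  # hypothetical module under test
original = open(MODULE).read()
try:
    open(MODULE, "w").write(mutate(original))
    if suite_passes():
        print("Mutant survived: the suite missed a planted flaw.")
    else:
        print("Mutant killed: the suite caught the planted flaw.")
finally:
    open(MODULE, "w").write(original)  # always restore the real code
```

It’s a partial answer because mutation testing only catches flaws the suite misses; it says nothing about the suite flagging flaws that aren’t there.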

Bob’s last word: It’s easy, and gratifying, to point out all the gaps, defects, fallacies, and potential pitfalls embedded in generative-AI implementations. In the generative-AI vs. human beings competition, we can rely on confirmation bias to assure ourselves that generative-AI’s numerous flaws will be thoroughly explored.

But even at the technology’s current level of development, we Homo sapiens need to consider the don’t-have-to-outrun-the-bear aspect of the situation:

Generative-AI doesn’t have to be perfect. It just has to be better at solving a problem than the best human beings are.

This week’s SQA*2 example … the automated generation of automated test suites … exemplifies the challenge we carbon-based technologies are going to increasingly face as we try to justify our existence given our silicon-based competition.

Bob’s sales pitch: You are required to read Isaac Asimov’s short story in which he predicts the rise of generative AI. Titled “Jokester,” it’s classic Asimov, and well worth your time and attention (and yes, I did say the same thing twice).

Now on CIO.com’s CIO Survival Guide: “5 IT management practices certain to kill IT productivity.” What’s it about? The headline is accurate.

I’m still on vacation (and will be for another week). I won’t be in a position to post a re-run tomorrow, so I’m sending this one out early. I don’t think anything in it has become at all stale, so give it a read even though you might remember it from 10 years ago. – Bob

# # #

Remember the rule from the KJR Manifesto that there’s no such thing as an IT project — they’re all business change projects that make use of information technology?

It’s just as true for the projects that result in so-called “shadow IT” — the information technology that happens without IT’s direct involvement. And because it’s shadow IT, the folks who ask for it know this. They’re looking for business improvement — that’s where their thought process starts. The linkage is automatic.

Last week’s column explained why IT should start supporting shadow IT. But that isn’t enough. We need to support shadow projects as well … the too-small-to-notice-but-too-important-to-let-fail projects business managers charter to make their shadow IT happen, and to make all kinds of other stuff happen too.

Let’s imagine, for the sake of argument, that your company has established a PMO or EPMO ([enterprise] program management office). If it’s like most PMOs, the company’s project managers all report there, and one of the rules is that all company projects must be managed by its trained project managers. That way, the company doesn’t risk investing in projects that are managed poorly.

Sounds a lot like the arguments against shadow IT, doesn’t it? Like those arguments, the driving force is risk reduction, but the actual impact is mostly opportunity avoidance.

Limiting the number of projects a business can take on to the number of available project managers artificially limits the company’s capacity for change. And when it comes to change, any bottleneck other than the company’s ability to absorb it is inappropriately limiting — a decision to adapt and improve more slowly than necessary.

Which is why, in so many companies that have established an official PMO or EPMO, business managers charter lots of under-the-radar projects.

The shadow project situation sounds more and more like shadow IT, doesn’t it?

On the whole, shadow projects have less risk and yield higher returns than most of the official projects in the company’s portfolio, a natural consequence of their being small, short, tightly focused, and properly sponsored.

Yes, properly sponsored, something that’s more often true of shadow projects than official ones, because shadow projects are started by business managers who personally want them to succeed. This makes them sponsors … real sponsors, by definition … and the importance of sponsorship in effective project management is well known.

Just in case: Real sponsors want their projects to succeed enough to stick their necks out and take risks when necessary to support their project-manager partners. That’s in contrast to assigned sponsors, who are thrown in front of official projects, just because the methodology says every project has to have one. Assigned sponsors don’t really care, because why would they?

So shadow projects have less risk than their formally chartered brethren. Except for one thing: They’re mostly led by employees who, while promising, have no project management training or previous experience. Their managers/sponsors, themselves usually unaware of what project management actually takes, tell them, “This will be a terrific development opportunity for you,” ManagementSpeak for “There’s a bus approaching at high speed!” followed by a shove.

The result is that right now, many shadow projects aren’t managed as projects at all, because the employees who are put in charge of them have never managed a project and have no idea where to start.

They need help.

So here’s a thought: Instead of trying to stamp out these shadow projects the way IT used to try to stamp out shadow IT, why not provide some support?

Like, for example, giving about-to-be-run-over-by-a-bus neophyte project managers some tools and training, instead of treating them like orphan stepchildren. The secret, and the challenge: Those best equipped to provide the tools and training know too much about the subject. They know, that is, the techniques needed to implement SAP, erect a skyscraper, or build a nuclear submarine.

What many of them don’t know is which of those techniques can be safely jettisoned when the task at hand is managing a team of three people for a few months — at a rough guess, 90% of their expertise. As is so often but so strangely the case, scaling something down can be harder than scaling it up.

Still, it can be done, and doing it is important. In the aggregate, shadow projects add up, even if no one of them is a big hairy deal.

If the PMO/EPMO reports inside IT, the CIO can make shadow project support part of its charter. If not, there’s no reason IT can’t provide it on its own.

Which is a nice irony: Where IT used to do its best to stamp out shadow activities, it has just become an active co-conspirator in them.