Before you can be strategic you have to be competent.
That’s according to Keep the Joint Running: A Manifesto for 21st Century Information Technology (me, 2012), the source of all IT management wisdom worth wisdoming.
An unglamorous but essential ingredient of IT organizational competence is software quality assurance (SQA), the nuts-and-bolts discipline that makes sure a given application does what it’s supposed to do and doesn’t do anything else.
SQA isn’t just one practice. It’s several. It checks:
Software engineering – whether code adheres to the overall system architecture, is properly structured, and conforms to coding style standards.
Unit testing – whether a module correctly turns each possible input into the expected output.
Integration testing – whether a module interacts properly with all the other modules the team is creating.
Regression testing – whether the new modules break anything that’s already in production.
Stress testing – whether the whole system will perform well enough once everyone starts to bang on it.
User acceptance – whether the new modules are aesthetically pleasing enough; also, whether they do what the business needs them to do – do they, that is, effectively support, drive, and manage the business processes they’re supposed to support, drive, and manage.
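To make the first couple of flavors concrete, here’s a minimal, hypothetical sketch of what unit testing looks like in practice. The function and the tests are invented for illustration; nothing here comes from a real application:

```python
# Hypothetical module under test: a discount calculator.
def discount(price: float, customer_type: str) -> float:
    """Return the discounted price for a given customer type."""
    rates = {"retail": 0.0, "member": 0.10, "wholesale": 0.25}
    if customer_type not in rates:
        raise ValueError(f"unknown customer type: {customer_type}")
    return round(price * (1 - rates[customer_type]), 2)

# Unit tests: each class of input maps to the expected output.
def test_member_discount():
    assert discount(100.0, "member") == 90.0

def test_retail_no_discount():
    assert discount(100.0, "retail") == 100.0

def test_unknown_type_rejected():
    # The module should refuse inputs it doesn't recognize.
    try:
        discount(100.0, "platinum")
        assert False, "expected a ValueError"
    except ValueError:
        pass
```

Run under a test runner such as pytest, tests like these accumulate into the automated suite; regression testing then amounts to re-running the whole accumulated suite after every change.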
Ideally, IT’s SQA function will establish and maintain automated test suites for all production applications and keep them current, to ensure efficient and correct unit, integration, regression, and stress testing.
In practice, creating and managing automated test suites is really, really hard.
This looks like a fabulous opportunity for generative AI, doesn’t it? Instead of asking it to generate a mathematical proof in the style of William Shakespeare, point your generative AI tool of choice to your library of production application code and tell it to … generate? … an automated test suite.
Generative AI, that is, could take one of the most fundamental but time-consuming and expensive aspects of IT competence and turn it into a button-push.
Brilliant!
Except for this annoying tidbit that’s been an issue since the earliest days of “big data,” generative AI’s forgotten precursor: how to perform SQA on big data analytics, let alone on generative AI’s responses to the problems assigned to it.
Way, way, way back we had data warehouses. Data warehouses started with data cleansing, so business statisticians could rely on both the content and architecture of the data they analyzed.
But data warehouse efforts were bulky. They took too long, were anything but flexible, and frequently collapsed under their own weight, which is why big data, in the form of Hadoop and its hyperscale brethren, became popular. You just dumped your data into a data lake, deferring data cleansing and structuring … turning that data into something analyzable … until the time came to analyze it. It was schema on demand, shifting responsibility from the IT-based data warehouse team to the company’s newly renamed statisticians, now “data scientists.”
The missing piece: SQA.
In scientific disciplines, researchers rely on the peer review process to spot bad statistics, along with all the other flaws they might have missed.
In a business environment, responsibility for detecting even such popular and easily anticipated management practices as “solving for the number” has no obvious organizational home.
Which gets us to this week’s conundrum. We might call it SQA*2. Imagine you ask your friendly generative AI to automagically generate an automated test suite. It happily complies. The SQA*2 challenge? How do you test the generative AI’s automated test suite to make sure the flaws it uncovers are truly flaws, and that it doesn’t miss some flaws that are present – feed it into another generative AI?
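One established, non-AI technique aims at exactly this question: mutation testing, in which you deliberately inject a known bug (a “mutant”) and check whether the suite notices. A toy sketch of the idea, with every name invented for illustration:

```python
# Sketch of mutation testing: seed a known bug, then check that the
# (possibly AI-generated) test suite actually fails on it.

def add(a, b):           # the real module
    return a + b

def add_mutant(a, b):    # deliberately broken variant ("mutant")
    return a - b

def generated_suite(fn) -> bool:
    """Stand-in for a generated test suite; True means all tests pass."""
    checks = [fn(2, 2) == 4, fn(0, 5) == 5]
    return all(checks)

# A trustworthy suite passes on the real code and "kills" the mutant.
assert generated_suite(add) is True
assert generated_suite(add_mutant) is False  # mutant caught: good sign
```

If a suite passes on mutants it should have caught, it’s missing coverage; you learn that much without feeding it into another generative AI. Whether mutation testing scales to validating whole AI-generated suites is, of course, an open question.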
Bob’s last word: It’s easy, and gratifying, to point out all the potential gaps, defects, fallacies, and pitfalls embedded in generative-AI implementations. In the generative-AI vs human beings competition, we can rely on confirmation bias to assure ourselves that generative-AI’s numerous flaws will be thoroughly explored.
But even in the technology’s current level of development, we Homo sapiens need to consider the don’t-have-to-outrun-the-bear aspect of the situation:
Generative-AI doesn’t have to be perfect. It just has to be better at solving a problem than the best human beings are.
This week’s SQA*2 example … the automated generation of automated test suites … exemplifies the challenge we carbon-based technologies are going to increasingly face as we try to justify our existence given our silicon-based competition.
Bob’s sales pitch: You are required to read Isaac Asimov’s short story in which he predicts the rise of generative AI. Titled “The Jokester,” it’s classic Asimov, and well worth your time and attention (and yes, I did say the same thing twice).
Now on CIO.com’s CIO Survival Guide: “5 IT management practices certain to kill IT productivity.” What’s it about? The headline is accurate.
Bob, have you written anything about the “new” fashion of continuous delivery? My son (a software engineer in his mid-30s) has a number of colleagues in various FANG and FANG-like companies who are very excited about continuous delivery. I come from a Big Telecom background, and we had releases that were huge. These came out every 3-6 months, with at least that length of SQA testing through huge regression test cases. We could not afford to crash the switch and deny phone service to a hundred thousand households. Thus we had a tightly version-controlled release that had been pummeled into cooperation. How does SQA live with continuous delivery? Or is continuous delivery not something we should use with mission-critical software (SCADA, for example)?
Sure I have, for example, here: https://issurvivor.com/2015/05/18/devops-benefits-hand-waving-and-fringe/ . Very short version: Huge releases require broad and deep SQA. Continuous integration / continuous delivery (DevOps’s CI/CD) doesn’t, on the grounds that each change that’s deployed is small enough that missing a critical bug is far less likely. Also, DevOps-style CI/CD depends on the use of automated test suites to catch defects.
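That CI/CD bargain can be sketched as a simple gate: every small change runs the whole automated suite, and only a green run ships. A toy illustration of the deploy decision (real pipelines use tools like Jenkins or GitHub Actions; everything here is invented):

```python
# Toy CI/CD gate: small change -> run automated suite -> deploy or reject.

def run_test_suite(change) -> bool:
    """Stand-in for the automated test suite; True means all-green."""
    return all(test(change) for test in change["tests"])

def ci_pipeline(change) -> str:
    if run_test_suite(change):
        return "deployed"   # small, well-tested change ships immediately
    return "rejected"       # one red test blocks the whole change

# Hypothetical changes: one healthy, one that breaks a test.
good = {"tests": [lambda c: True, lambda c: True]}
bad  = {"tests": [lambda c: True, lambda c: False]}
```

The gate works precisely because each change is small: when a deployment is a handful of lines rather than six months of accumulated work, the automated suite has a plausible chance of covering what changed.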
Well, it kind of goes into software engineering, but it’s also something people don’t realize is part of the software engineering process, so they skip it: the documentation necessary to maintain the software.
That seems to be something skipped an awful lot. Our world doesn’t realize that software will have to be maintained, updated, and changed over time to meet new requirements, or desires.
I had an instructor in Pascal who did probably the best thing that ever happened to me in college as far as getting an education. For the third program in your first semester of Pascal programming, he told you that you absolutely had to keep a copy of the program, because you were going to need it later, and need it very badly. In the third semester, i.e., the first semester of your second year, the first program in that class was to maintain your code from that third program and make specific changes to it. It had to be your program; if you hadn’t kept a copy, you got a zero. Boy, did you learn a lot about proper programming practices and documentation there, because after a year you had completely forgotten what you did in that pretty darn simple program. And that was with only one person programming.
When we talk about software engineering we toss in a little bit about this. But we kind of gloss over the need to have good maintainable code.
And you kind of skipped over the last one a little bit, or didn’t hit enough on user acceptance also being the user’s ability to understand how to use the program, i.e., user documentation. That’s a big part of software quality.
Oh, by the way, both of these are an even bigger part of hardware systems. And do we even want to talk about how important proper documentation and user instruction are for network systems? 😉 Actually, I think we absolutely do, because that’s my area, and believe me, I see it almost never gets done.
It strikes me that as it evolves and additional capabilities are built in, that generative AI will gain the ability to read code and generate documentation. Until that halcyon day, I agree with you that it would make sense to include a documentation review in the SQA process. Thanks for making this excellent point.
I’ve read the Asimov story now. The title seemed vaguely familiar; I now realize that I read this story many years ago, and it is as striking now as it was then.
One of the themes that’s being emphasized today by everybody commenting on the new generative AIs: there is suddenly a valuable new skill for everyone to learn, the art of Prompt Formulation. The last time I went through a new-skill-learning project like this was learning the art of Google Query Formulation. In Asimov’s story, the skill that makes Grand Masters important and valuable is essentially Prompt Formulation. I really like (and remember liking from LAST time around) the notion in the story that all of the REASONABLE questions have already been asked and answered, therefore ALL the remaining questions worth asking are seemingly UNREASONABLE questions.
The story’s revelation (SPOILER ALERT!) that NOBODY ever writes an original joke isn’t quite right. Even if JOKES specifically are all memes that are introduced from outside, HUMOR more generally is being invented in vast quantities constantly by everybody. For example, this weekend, I myself was inspired by a bizarre and surreal-seeming news report to compose a satirical poem that I am VERY proud of.
The Dallas school district has created a book — complete with water-color artwork, and a Pooh-style poem — that uses Winnie-the-Pooh and his friends to teach young children how to try to avoid getting murdered by an Active School Shooter. The Pooh stories were written and copyrighted so long ago that the characters have now fallen into the public domain, so that ANYONE may now legally write and publish, and earn money from, new ORIGINAL Winnie-the-Pooh stories. A horror movie starring Pooh and Piglet (as murderous monsters) has already been produced and released (and it flopped).
https://www.nytimes.com/2023/05/26/us/winnie-the-pooh-school-shooting-book.html
From the actual news story:
. “If danger is near, do not fear,” the book reads. “Hide like Pooh does until the police appear.”
My version (excerpts):
. At Hundred-Acre School, in the morning sun,
. here comes a nasty kid with a gun.
. The gun with a kid should NOT be there —
. and soon they’ll be met by a gun with a bear.
To solve the gun-violence problem, if the solution to a Bad Guy With A Gun is a GOOD Guy With A Gun, then it follows logically that a Good BEAR With A Gun will also work.
. The nasty kid approaches… his weapon gleams,
. with a lurid, blue-black metallic sheen.
. He feels so powerful! Then BLAM, he’s dead…
. from a single Pooh-shot, straight to his head.