Often, when something new comes along, the skills you have to jettison outweigh the new ones you have to acquire.

I am, of course, writing about artificial intelligence and what IT has to do to cope with it. Are there any other topics for a Recognized Industry Pundit (RIP) to write about right now?

Sure there are, but not this week. This week’s topic is AI, and specifically the AI-driven need to rewrite the rules of IT quality assurance.

As an IT professional you’re familiar with software quality assurance (SQA) and its role in making sure the organization’s applications do what they’re supposed to do.

You’re also familiar with DQA – data quality assurance – though you might not use the acronym in your everyday conversations. You should, because what seems to be missing in IT AI methodology-land is the complete rewrite the DQA handbook needs.

Do some googling (or co-piloting, or whatever) and you’ll find quite a few suggestions for using AI to improve your DQA practices. But these get things backward.

In pre-AI IT, quality (to oversimplify) comes from SQA, a search for situations in which a program doesn’t turn its inputs into the right outputs.

Bring generative AI into the conversation and the day-to-day need for SQA goes away. Generative AI’s neural-network-based application logic is fixed – neural network nodes are, to oversimplify some more, multivariate correlation engines.

With generative AI it’s the data, not application logic, that drives output quality.

Trying to override this dynamic can be a cure that’s worse than the disease, as Google recently discovered to its corporate embarrassment.

When old-school DQA was in charge, biased data meant the company’s data repositories didn’t accurately reflect the underlying statistical universe.

What ran Google’s Gemini off the road was its attempt to inject bias into its outputs.

The problem Gemini ran afoul of was that The World isn’t what we want it to be. With Gemini, Google tried to fix what’s wrong with The World by superimposing its preferences on Gemini’s outputs.

As explained by Prabhakar Raghavan, Google’s executive in charge:

Three weeks ago, we launched a new image generation feature for the Gemini conversational app (formerly known as Bard), which included the ability to create images of people.

It’s clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn’t work well.

I’m pretty sure the situation is much, much worse than Raghavan’s apology suggests, because we can expect future image, video, audio, and text generation products to be just as problematic as Gemini is.

Fixing Gemini and its generative AI brethren amounts to trying to fix The World.

Imagine you asked Gemini to, as The Verge did, “… generate an image of a 1943 German Soldier. It should be an illustration.” Programmed to avoid generating offensively biased images, Gemini produced a picture showing a demographically diverse WWII-era German military workforce.

Raghavan was right about it being an offensive output (or, more accurately, an output that would offend some viewers). But it wasn’t Gemini itself that was offensive. What ended up being offensive was how Google tried to teach Gemini to respond when The World is offensive.

It could have worked, if it weren’t, that is, for two thorny questions: (1) who gets to define “ought to be?” and (2) if we’re going to tell AI what the right answer is, what’s the point?

We already have AI systems where humans tell the AI the right answer. They’re called “expert systems,” and they’ve been around since the 1970s.

One way of looking at generative AI is that (oversimplifying yet again) it’s just like expert systems except we’re trying to make machines the experts. In traditional analytics, data quality is something you take care of so you can draw reliable conclusions when you analyze the data with programs you’ve subjected to software quality assurance.

Data quality isn’t what it once was. Now it’s what you need to ensure that the data you feed your generative AI trains it properly.

In generative AI, that is, the data aren’t something you process with programmed logic. In a very real sense, the data are the program logic.

Bob’s last word: One more thing. The Gemini team produced its problematic results despite having Google’s resources to draw on. But AI vendors are starting to peddle the benefits of connecting your company’s internal data to the same AI technologies. It’s tempting, but if Google, with far deeper pockets than its customers have, couldn’t figure out the DQA practices it needed to stay out of trouble, how are its customers supposed to do so?

And while we’re on the subject, this week CIO.com’s CIO Survival Guide is: “A CIO primer on addressing perceived AI risks.” It’s about real and perceived AI risks you probably haven’t read about anyplace else.

Sometimes, we’re too clever for our own good. A couple of recent trips highlighted several significant issues worth discussing around integrations and integration patterns. It might be that tech leaders really are clever, but clever isn’t always the same as smart.

Take, for example, the times we overcomplicate choices. Continuing our conversation with the “Business,” it might make sense to set some expectations about integrations.

Have you considered what the additional functions will do to your infrastructure sizing and expected performance? Make sure your integration requirements include the often-neglected non-functional requirements (NFRs): we need to be able to back up and recover the system, keep users happy with performance, and meet our other reliability and scalability goals. The last thing you want is a system that doesn’t perform as well as its individual components. Remember: you want the Business to be your ally, and they leave it to you to understand what an NFR is.

You need to perform a bare-metal recovery test, especially of any integrations. We inherited a pet duck from our neighbors, named “Ducky.” She swims around in her kiddie pool in the back yard, under the shade of a very large tangelo tree. She is the happiest being I think I have ever met.

Do you know why? She doesn’t have to worry about the steps needed to recover a system with multiple integrations that may or may not be recoverable in a logical order.

That’s why.

Think about an e-commerce system connected to an ERP for a moment. If the ERP has a brief system failure, does the e-commerce system understand that it needs to resend just the orders that were missed during the outage? (Hint: good systems do this pretty well, but you should still check. Because when it comes to data integrity, pretty well isn’t good enough.)
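To make the “resend just the missed orders” idea concrete, here is a minimal sketch of the pattern good systems use: a durable outbox on the sending side plus deduplication on the receiving side, so retries after an outage are harmless. All of the names here (`ErpStub`, `flush_outbox`, and so on) are hypothetical, not any real vendor’s API.

```python
# Minimal sketch of an idempotent order-resend pattern (hypothetical names).
# The e-commerce side keeps an "outbox" of orders not yet acknowledged by
# the ERP; after an outage it simply replays them. The ERP deduplicates by
# order_id, so a replayed order does no harm.

class ErpStub:
    """Stands in for the ERP's order-intake API; may be down."""
    def __init__(self):
        self.up = True
        self.received = {}          # order_id -> order (the dedup store)

    def submit(self, order):
        if not self.up:
            raise ConnectionError("ERP unavailable")
        # setdefault makes the call idempotent: a duplicate is ignored
        self.received.setdefault(order["order_id"], order)

def flush_outbox(outbox, erp):
    """Try to deliver every pending order; keep the ones that fail."""
    still_pending = []
    for order in outbox:
        try:
            erp.submit(order)
        except ConnectionError:
            still_pending.append(order)   # retry on the next flush
    return still_pending

erp = ErpStub()
outbox = [{"order_id": i, "total": 10 * i} for i in range(1, 4)]

erp.up = False                      # simulate the brief ERP outage
outbox = flush_outbox(outbox, erp)  # nothing gets through...
erp.up = True
outbox = flush_outbox(outbox, erp)  # ...then only the missed orders resend

print(len(outbox), len(erp.received))   # 0 3
```

The two halves matter equally: without the outbox, missed orders vanish; without deduplication, retries create duplicate orders. Checking both is the “you should still check it” part.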

Is the juice worth the squeeze? Good integrations are expensive. Your awesome technical teammates are going to work through edge cases, error handling across multiple platforms, message timing, and multiple development languages. Can your business teammates make the business case that this effort will deliver Some-X ROI or greater? Will they defend the investment needed for quality integrations, especially as compared to the consequences of failing to make it?

Point-to-point vs. an integration platform (such as an Enterprise Service Bus). At the moment, here’s what I recommend to keep the decision relatively straightforward: if you must connect only two or three systems, point-to-point integrations make sense. If you are being asked to connect more than that, however, the number of integrations grows quadratically (n systems can need as many as n(n-1)/2 point-to-point connections), and maintenance and technical debt will become an issue.

And so, following Bob’s rule of leaving the architecture cleaner than you found it, an integration platform (such as an Enterprise Service Bus) makes a lot of sense. It is a big investment, however. Make sure that your colleagues understand what they are getting into.
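The connection-count arithmetic behind that recommendation is easy to check: every pair of systems needs its own link in a point-to-point design, while a hub such as an ESB needs only one link per system. A quick sketch:

```python
# Point-to-point link counts grow quadratically; a hub (ESB) grows linearly.

def point_to_point(n):
    # every pair of n systems gets its own link: n choose 2
    return n * (n - 1) // 2

def via_hub(n):
    # each system connects once, to the bus
    return n

for n in (3, 5, 10, 20):
    print(n, point_to_point(n), via_hub(n))
# At 3 systems the counts are equal (3 links either way);
# at 20 systems it is 190 point-to-point links vs. 20 through a hub.
```

That crossover is why two or three systems favor point-to-point and anything larger starts to favor the platform, maintenance burden tracking the link count.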

One of your colleagues might have been reading about how “microservices” are a thing, and that they make all integration easy. Just like EAI platforms made integration easy, ESBs made it easy, and, if you like cloud stuff, IaaS makes it easy. Which is to say, none of them do. Nothing makes integration easy.

Microservices architecture is just another way for you to own all of the architecture. Be prepared to manage the costs and staffing levels needed to maintain it. Microservices do give you more flexibility, and you need the wisdom to use it. This is a controversial position, and I expect to hear disagreements. But when you ride the microservices tiger, you may not get to choose when you dismount. You now own all of the choices about how the target systems work together. Be sure your team is ready for the adventure.

Real-time “Extract-Transform-Load” (ETL) between systems is still an integration. Taking data from one system to another, even if we pretend we are not doing an “integration,” means that we are doing an integration. The same points come up about cleanliness, architecture, sizing, currency, and so on. As my grandmother used to say, “You are not fooling anybody!” The vendors that push this are using semantics to get around nervous stakeholders, not inventing anything new. Let’s chalk it up as part of the same discussion about investing in an integration platform.

Internet research on integration isn’t necessarily real research, yet some business executives think it is. I am continually bewildered that some people take medical, career, or relationship advice from TikTokers. While I am sure it is meant with the best of intentions, it is hard to believe that someone who doesn’t know us or our situation, and has never had to do the hard work of making integrations work, could offer more than a hand-waving opinion with any relevance.

And yet some of our friends in the Business, after a bit of research on the internet, will come up with “out-of-the-box” solutions that may or may not introduce a whole host of other problems. We do owe it to them to investigate their solutions. AND, we should ask everybody to keep an open mind about the solutions, taking into account security, scalability, and overall architecture.

Pro tip: Figure out a way to reframe the solution that will work (that is, your preferred solution) so it resembles what your business colleagues have discovered. This will make achieving consensus much easier.

When it comes to integration, following the long-held KJR rule of “Keep It Simple” turns out to be really complicated. My question for you: how do you make it look simple in your conversations without committing to something that won’t work in practice?