“We Must Regulate A.I. Here’s How,” writes Lina Khan, chair of the Federal Trade Commission, in the May 3, 2023 edition of the New York Times.

Ms. Khan is not stupid, and she makes a compelling case that unregulated generative AI might result in many deleterious outcomes. Regrettably, she misses the mark in two key aspects of the situation.

The first is easy to spot: That unregulated AI might be problematic doesn’t mean regulated AI will not be problematic.

The second and more consequential: Defining generative AI as a category will prove challenging at best.

The reason? Generative AI technologies are already sliding down the slippery evolutionary slope that many earlier technologies have traversed, from application-layer solutions to platform-layer building blocks.

If the point isn’t clear, consider SharePoint. It started out as an application – a document management system. As Microsoft steadily added capabilities to it, SharePoint morphed from a DMS into a general-purpose application development environment.

Imagine some of SharePoint’s capabilities started to look alarming in one way or another.

No, not annoying. Alarming. Enough so that various pundits called for its regulation.

Would that mean every application programmed using SharePoint as, say, its DBMS should be … heck, could be … subject to regulation?

Well, SharePoint-as-Platform could, in theory, be regulated as a thing. That might last for a short while, but only until Microsoft disaggregated SharePoint as a platform, breaking it up into a collection of operating system services, much as happened with browser capabilities decades ago.

We can expect the same with generative AI. Its capabilities, from researcher-and-essay-writer to deep-fake-creator, will, we can predict with confidence, become embedded as platform-layer technologies in large-scale application suites, where their regulation will be no more possible than regulating any other embedded IT platform technology.

Put differently, generative AI will be built into general-purpose business applications. It’s easy to envision, for example, generative-AI-enabled ERP, CRM, and HRIS suites. Try to imagine distilling and regulating just the generative-AI capabilities that will be built into these already-familiar application categories.

The threat(s)

I asked ChatGPT to list the five most important generative AI threats. It answered with five versions of the same threat, namely, misinformation and disinformation, whether in the form of deepfakes, counterfeits, or other incursions into what’s real.

The threats, from where IT sits

Just my opinion here (not ChatGPT’s opinion): The single most obvious threat from generative AI is to information security. Deepfakes will vastly increase an organization’s vulnerability to the various forms of phishing attack, with all their well-known data-theft and ransomware consequences.

Generative AI will also create whole new categories of business sabotage. Imagine the damage an unscrupulous competitor could do to your company’s image and brands using even the current generation of deepfake creation software. If this doesn’t look to you like an IT problem, it’s time to re-think what you consider IT’s role in the business to be.

A popular framework for formulating business strategy, re-framed for the KJR perspective, is TOWS, which stands for Threats, Opportunities, Weaknesses, Strengths. As has been pointed out here from time to time, a capability is an opportunity when your business achieves it and a threat when a competitor does. And many of today’s business threats and opportunities come from new forms of information technology.

So it isn’t good enough for IT to implement and manage business applications and their underlying platforms while declaring the business misuses of generative AI Someone Else’s Problem. IT has strategic roles to play, including the identification of IT-driven threats and opportunities.

If not regulation, then what?

Getting back to how we as businesses and as society as a whole should be dealing with the threats and opportunities posed by generative AI, regulation isn’t going to do us much good.

What will? First, foremost, and most obviously we can expect the purveyors of anti-malware products to build machine-learning technology into their products, to identify generative-AI-based phishing and other penetration attempts.

Second, we can expect the purveyors of marketing automation systems to build machine-learning-based content scanning capabilities into their products, to help you spot deepfake and other brand-damaging content so your marketing department is equipped to deal with this category of IT-driven business threat.

Bob’s last word: More generally, there are problems in this world for which centralized governance provides the best … and sometimes only … solutions. There are others for which a marketplace model works better.

When it comes to generative AI, and this is just my opinion, mind you, the marketplace approach will prove to be quite unsatisfactory.

But far more satisfactory than any regulatory alternative.

Sometimes, in the wise words of Argo’s Tony Mendez, we have to go with the best bad plan we have.

Bob’s sales pitch: Every week in Keep the Joint Running I try to provide perspectives subscribers will find useful, readable, and unconventional – there’s no point in being repetitious, and even less point in being boring.

You’re the only form of promotion I use, so if you find KJR’s weekly points of view valuable, it’s up to you to spread the good word.

This week on CIO.com’s CIO Survival Guide: “7 venial sins of IT management.” They aren’t the worst things a CIO can do, but they certainly aren’t good ideas.

As someone wiser than me pointed out, every organization is perfectly designed to get the results it gets.

As someone exactly as wise as I am (that is to say, me) has been known to point out, change happens when someone in a position to do something about a situation has concluded that how their organization does things isn’t good enough.

If you’re that person, do a bit of Googling (or, I suppose, Bing-ing) and you’ll find lots of alternatives for designing an organizational change, including such disciplines as Lean, Six Sigma, Lean Six Sigma, Process Re-engineering, and the Theory of Constraints.

Assuming you choose a change discipline that fits what you’re trying to accomplish, each of these can deliver a change design that can work.

Do a bit more Googling or Bing-ing and you’ll find a complementary change discipline called OCM – Organizational Change Management – whose purpose is to discover and mitigate barriers to organizational change. It’s essential if you want your intended change to become an accomplished change.

Try to make the change happen, though, and you might discover there’s something in the plan that’s either too ambitious, or not ambitious enough.

If it’s too ambitious you’ll find the first chunk of organizational change is too complicated by half – what’s often described as changing the plane’s engine while you’re still in flight.

Or, worse, you’re trying to convert your biplane into a single-wing aircraft without first landing.

When your chosen starting point is at the opposite end of the continuum – when it isn’t ambitious enough – it goes by the orchardarian moniker “low-hanging fruit.”

Going after low-hanging fruit is a popular consulting recommendation. It’s usually a mistake because it creates the illusion of forward progress while failing to set the stage for additional forward progress. Extending the metaphor, go after low-hanging fruit and you’ll find you’re clutching a lemon in your left fist and a tree branch in your right, all while you’re trying to avoid falling off your ladder.

Or, because metaphors don’t (speaking of metaphors) build a very good foundation for a logical edifice, let’s make it real: achieve a quick win and you’re left without a plan for what happens next.

Quick wins deliver the illusion of progress, but with no momentum or trajectory.

The missing piece

Quick win proponents get one thing right – that the hardest part of most intended changes is getting started. What they fail to recognize is that staying started is harder than getting started.

We might call what’s needed a “Quick Win Plus.” Like a quick win, a quick win plus gets the change started by making a small, manageable, clearly envisionable change.

Unlike a quick win, the change a quick win plus accomplishes is one that deliberately includes ripple effects – dependencies that encourage additional changes elsewhere in the organization. In particular, it will encourage the creation or improvement of a few competencies critical to ongoing success – that is, it will encourage additional beneficial changes.

Some changes don’t fit this mold – they just can’t, for one reason or another, be decomposed into a swarm of small, independent alterations in how work gets done. These big, complicated changes are the ones that call for disciplined, experienced project management and diversion of staff from their day-to-day responsibilities to full or nearly full commitment to the project team.

Bob’s last word: The way the business world is evolving, big, complicated organizational change is becoming decreasingly feasible. Battle-tested project managers have always been in short supply, while the staffing levels needed for traditional project-managed change are higher than most businesses are able to sustain.

Which is why so many organizations are gravitating to agile-oriented, iterative and incremental change methods.

The quick-win-plus approach fits this thought process well.

Bob’s sales pitch: I can only wish I’d had anything to do with Good Night Oppy. It’s the story of the Spirit and Opportunity Mars rovers. You must watch it – then you’ll wish you’d been a part of it too.

It’s simply wonderful – a very human story, brilliantly told. And after you watch it I can pretty much guarantee you’ll be telling your friends that they must watch it too.

Now on CIO.com’s CIO Survival Guide: “Why IT surveys can’t be trusted for strategic decisions.” It’s an accurate title.