“We Must Regulate A.I. Here’s How,” writes Lina Khan, chair of the Federal Trade Commission, in the May 3, 2023, edition of the New York Times.

Ms. Khan is not stupid, and she makes a compelling case that unregulated generative AI might result in many deleterious outcomes. Regrettably, she misses the mark in two key aspects of the situation.

The first is easy to spot: That unregulated AI might be problematic doesn’t mean regulated AI will not be problematic.

The second and more consequential: Defining generative AI as a category will prove challenging at best.

The reason? Generative AI technologies are already sliding down the slippery evolutionary slope that many earlier technologies have traversed, from application-layer solutions to platform-layer building blocks.

If the point isn’t clear, consider SharePoint. It started out as an application – a document management system. As Microsoft steadily added capabilities, SharePoint morphed from a DMS into a general-purpose application development environment.

Imagine that some of SharePoint’s capabilities started to look alarming in some way or other.

No, not annoying. Alarming. Enough so that various pundits called for its regulation.

Would that mean every application programmed using SharePoint as, say, its DBMS should be … heck, could be … subject to regulation?

Well, SharePoint-as-Platform could, in theory, be regulated as a thing. That might last for a short while, but only until Microsoft disaggregated SharePoint as a platform, breaking it up into a collection of operating system services, much as happened with browser capabilities decades ago.

We can expect the same with generative AI. Its capabilities, from researcher-and-essay-writer to deep-fake-creator, will, we can predict with confidence, become embedded as platform-layer technologies in large-scale application suites, where their regulation will be no more possible than regulating any other embedded IT platform technology.

Put differently, generative AI will be built into general-purpose business applications. It’s easy to envision, for example, generative-AI-enabled ERP, CRM, and HRIS suites. Try to imagine distilling and regulating just the generative-AI capabilities that will be built into these already-familiar application categories.

The threat(s)

I asked ChatGPT to list the five most important generative AI threats. It answered with five versions of the same threat, namely, misinformation and disinformation, whether in the form of deepfakes, counterfeits, or other incursions into what’s real.

The threats, from where IT sits

Just my opinion here (not ChatGPT’s opinion): The single most obvious threat from generative AI is to information security. Deepfakes will vastly increase an organization’s vulnerability to the various forms of phishing attack, with all their well-known data-theft and ransomware consequences.

Generative AI will also create whole new categories of business sabotage. Imagine the damage an unscrupulous competitor could do to your company’s image and brands using even the current generation of deepfake creation software. If this doesn’t look to you like an IT problem, it’s time to re-think what you consider IT’s role in the business to be.

A popular framework for formulating business strategy, re-framed for the KJR perspective, is TOWS, which stands for Threats, Opportunities, Weaknesses, Strengths. As has been pointed out here from time to time, a capability is an opportunity when your business achieves it and a threat when a competitor does. And many of today’s business threats and opportunities come from new forms of information technology.

So it isn’t good enough for IT to implement and manage business applications and their underlying platforms and declare the business misuses of generative AI to be Someone Else’s Problem. IT has strategic roles to play, including the identification of IT-driven threats and opportunities.

If not regulation, then what?

Getting back to how we – as businesses and as a society – should deal with the threats and opportunities posed by generative AI: regulation isn’t going to do us much good.

What will? First, foremost, and most obviously we can expect the purveyors of anti-malware products to build machine-learning technology into their products, to identify generative-AI-based phishing and other penetration attempts.
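To make the idea concrete, here’s a deliberately toy sketch – invented training data, not any actual vendor’s technique – of the kind of machine-learning text classification an anti-malware product might embed to flag likely phishing messages:

```python
# Toy illustration (not a product): a tiny naive Bayes classifier that scores
# messages as "phishing" or "benign" from word counts. All examples invented.
from collections import Counter
import math

TRAINING = [
    ("urgent verify your account password now", "phishing"),
    ("click this link to claim your prize", "phishing"),
    ("your invoice is overdue wire funds immediately", "phishing"),
    ("meeting moved to 3pm see agenda attached", "benign"),
    ("quarterly report draft attached for review", "benign"),
    ("lunch on thursday to discuss the roadmap", "benign"),
]

def train(examples):
    """Count word frequencies per label, plus how many examples carry each label."""
    word_counts = {"phishing": Counter(), "benign": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def score(text, word_counts, label_counts):
    """Return the more likely label, using log-probabilities with add-one smoothing."""
    vocab = set()
    for counts in word_counts.values():
        vocab.update(counts)
    total = sum(label_counts.values())
    best_label, best_logp = None, -math.inf
    for label, counts in word_counts.items():
        logp = math.log(label_counts[label] / total)  # label prior
        denom = sum(counts.values()) + len(vocab)     # smoothing denominator
        for word in text.split():
            logp += math.log((counts[word] + 1) / denom)
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

word_counts, label_counts = train(TRAINING)
print(score("verify your password now", word_counts, label_counts))  # → phishing
```

Real products would of course train far richer models on far more data, but the shape of the solution – score incoming content against learned patterns rather than fixed signatures – is the same.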

Second, we can expect the purveyors of marketing automation systems to build machine-learning-based content scanning capabilities into their products, to help you spot deepfake and other brand-damaging content so your marketing department is equipped to deal with this category of IT-driven business threat.

Bob’s last word: More generally, there are problems in this world for which centralized governance provides the best … and sometimes only … solutions. There are others for which a marketplace model works better.

When it comes to generative AI, and this is just my opinion, mind you, the marketplace approach will prove to be quite unsatisfactory.

But far more satisfactory than any regulatory alternative.

Sometimes, in the wise words of Argo’s Tony Mendez, we have to go with the best bad plan we have.

Bob’s sales pitch: Every week in Keep the Joint Running I try to provide perspectives subscribers will find useful, readable, and unconventional – there’s no point in being repetitious, and even less point in being boring.

You’re the only form of promotion I use, so if you find KJR’s weekly points of view valuable, it’s up to you to spread the good word.

This week on CIO.com’s CIO Survival Guide: “7 venial sins of IT management.” They aren’t the worst things a CIO can do, but they certainly aren’t good ideas.

Just in case you haven’t heard, the hot new buzzword (okay, buzz-phrase) is “generative AI.” It’s defined as “… algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos.”

By now we’ve all been inundated with alarming accounts of generative AI’s societal consequences (including this, from me). And there’s a lot to be alarmed about, from the erosion of reality I wrote about in the referenced KJR to the potential obsolescence of Homo sapiens. To which, let me add this thought, in the interest of being first to ring the bell: What’s the big deal? Thirty or more years ago, pedagogues worried that the advent of cheap 10-key calculators would be the end of mathematics education as we know it. Now, there no longer seems to be much of a point to memorizing the multiplication tables.

The question: In the future, will we look back and wonder what all the fuss was about? Because really, isn’t generative AI’s impact just like that of the 10-key calculator but for verbal skills instead of arithmetic?

As a writer, consultant, and IT industry wiseguy I take such things personally. So to discover whether I’m already obsolete or not, I decided to put generative AI – OpenAI version – through its KJR paces by asking some questions that are central to KJR’s mission, starting with this: What are the most important factors in making IT organizations effective?

ChatGPT responded with a 600-word essay. Its core points:

  • Leadership and Management
  • Skilled Workforce
  • Clear Goals and Objectives
  • Effective Communication and Collaboration
  • Agile and Adaptable Mindset
  • Robust IT Governance
  • Customer Focus
  • Efficient Resource Management

It followed each of these headlines with a concise, explanatory paragraph.

For contrast, here’s the framework I’ve relied on for just about forever – Effective IT masters:

  • Business integration
  • Process maturity
  • Technical architecture
  • Human performance

My first reaction to ChatGPT’s answer was dismay. ChatGPT’s essay was, I had to concede, quite good.

But then I compared the two more closely. I’m delighted to report that every element of the ChatGPT framework fits into the KJR framework. Leadership and Management, and a skilled workforce, for example, are two sub-factors among a half-dozen that are subsumed by Human Performance.

The reverse isn’t true – the ChatGPT framework ignores, for example, the impact of a strong technical architecture, not to mention the critical role process maturity plays in the effectiveness of not only IT, but all organizations.

Ready for another? I asked, “What are the eight tasks of leadership?”

The answer (again, just the headlines):

  1. Setting Vision and Direction
  2. Communicating and Influencing
  3. Building and Managing Teams
  4. Making Decisions
  5. Planning and Organizing
  6. Managing Change
  7. Developing Others
  8. Ensuring Accountability and Results

KJR’s eight tasks:

  1. Setting direction
  2. Delegating
  3. Staffing
  4. Decision-making
  5. Motivation
  6. Managing team dynamics
  7. Engineering culture
  8. Communicating

Again, ChatGPT’s response was impressive – well-written, concise, and plausible. But digging in, it misses quite a bit. For example, the closest it gets to delegating is “developing others,” but that inverts purpose and fringe benefit: delegating is the point; developing others is a byproduct.

As for communicating – arguably the single most important of the eight leadership tasks – ChatGPT entirely ignored listening and facilitating.

So I decided to give ChatGPT one more chance: “Is ‘internal customer’ a good idea?”

Its answer was, I have to admit, quite good – much as the winning applicant for an open CIO slot might have responded. Again, summarizing:

Positive aspects of internal customer concept:

  • Encourages collaboration
  • Enhances service mindset
  • Emphasizes employee satisfaction

Limitations of internal customer concept:

  • May promote silos
  • Can blur roles and responsibilities
  • May not reflect organizational dynamics

Not bad. Personally, I think it’s too kind, but then I’m heavily invested in the end of the internal customer concept, so I can’t claim to be unbiased.

Bob’s last word: So I asked ChatGPT one more question: Should CIOs consider Bob Lewis a trusted advisor?

Its answer: “As an AI language model, I cannot make specific recommendations about individuals or their trustworthiness. Whether a Chief Information Officer (CIO) should consider Bob Lewis, or any other individual, as a trusted advisor would depend on various factors, including the CIO’s specific needs, goals, and context, as well as Bob Lewis’s qualifications, experience, and reputation.”

“Bob Lewis is a well-known author and consultant in the field of IT management and organizational effectiveness. He has written books on IT leadership and management, and has provided consulting services to organizations on IT strategy, governance, and operations. His insights and perspectives may be valuable for some CIOs seeking guidance in these areas.”

I’ll give ChatGPT points for diplomacy, and I guess I should be pleased that I appear on its radar screen at all.

Bob’s sales pitch: Please don’t think I’m encouraging you to shift your loyalties from KJR to ChatGPT. The latter’s responses are certainly plausible, but that isn’t the same thing as standing up to scrutiny (I nearly said “close scrutiny,” but if it’s scrutiny it’s close by definition).