This week’s KJR Challenge: Read this Microsoft word salad: “Introducing Microsoft 365 Copilot – your copilot for work – The Official Microsoft Blog” and figure out what Microsoft 365 Copilot is. Or, failing that, figure out what it does.

The linked blog entry was attributed to Jared Spataro, Microsoft’s Corporate Vice President, Modern Work & Business Applications.

Which leads to your next KJR Challenge: What on earth does that job title mean?

Meaning no offense, Mr. Spataro, but the only reason I have any confidence that you’re a Live Human Being and not a ChatGPT avatar is that I can usually make heads or tails of a ChatGPT essay.

That, and that your average ChatGPT essay doesn’t include so many questionable assertions. Examples:

“Humans are hard-wired to dream, to create, to innovate.”

No, we aren’t. To the extent we’re hard-wired to do anything it’s to increase our DNA’s representation in the future population’s gene pool. And even that hard-wired drive is buffered by a bunch of intermediate effects.

“With Copilot, you’re always in control. You decide what to keep, modify or discard. Now, you can be more creative in Word, more analytical in Excel, more expressive in PowerPoint, more productive in Outlook and more collaborative in Teams.”

No. With Copilot we won’t be more creative in Word. With Copilot we mere humans will stop being creators. Copilot will turn us into editors instead.

I have nothing against editors. But editing isn’t creative and isn’t supposed to be creative.

Oh, and by the way, I might not be feeling collaborative; sometimes I don’t feel collaborative for intensely valid reasons. If Copilot were to make me more collaborative in Teams I most definitely wouldn’t be in control.

“With our new copilot for work, we’re giving people more agency and making technology more accessible through the most universal interface — natural language.”

Microsoft apparently buys into Springer’s Law, named after my old friend Paul Springer, who asked, “Why use a picture when a thousand words will do?”

Oh, and by the way, people misunderstand what’s said to them all the time. Why would we expect Copilot to be better at interpreting natural language than we human beings, who have had tens of thousands of years of practice at it?

Just my opinion: Clicking on an icon is faster and more efficient than using sentences to explain what you’re trying to do.

“… every meeting is a productive meeting with Copilot in Teams. It can summarize key discussion points — including who said what and where people are aligned and where they disagree — and suggest action items, all in real time during a meeting.”

Okay, this is just silly. Or else, terrifying. Unless Copilot can barge in and mute everyone’s microphone to say, “You’ve made this point thirteen times already, Fred. Please stop so we can move on,” it won’t make meetings more productive.

Copilot “… creates a new knowledge model for every organization — harnessing the massive reservoir of data and insights that lies largely inaccessible and untapped today.”

The ever-helpful Bing implementation of ChatGPT explains that, “A knowledge model is a computer interpretable model of knowledge.” Yes, that’s right. A knowledge model is a model of knowledge. And that’s the best definition of “knowledge model” I could find.

One more: “Uplevel skills. Copilot makes you better at what you’re good at and lets you quickly master what you’ve yet to learn.”

Except that as it turns out, Copilot doesn’t “uplevel” [don’t blame me for this linguistic abomination] anyone’s skills. So far as I can tell it doesn’t show you how to do something. It does whatever-the-task-is for you.

But delegation is a skill, so I guess gaining the ability to delegate to Copilot constitutes “upleveling” your delegation skills.

But it’s a stretch.

Bob’s last word: Don’t get me wrong. A year ago I was impressed with Google’s semantic search capabilities. Now, more and more I’m complementing it with Bing’s generative AI research summarizations. Its abilities are impressive, and I expect Copilot and similar technologies will turn out to be highly consequential.

But as impressive as generative AI is, it also encourages me to be lazy.

For this I don’t need encouragement. And if we’re going to equate laziness and increased productivity … I think we’re going to need a new knowledge model to sell the idea.

Bob’s sales pitch: Every time I email a fresh column to the assembled KJR multitudes, my mailing service drops those subscribers whose emails bounce due to full mailboxes or other errors. The result is a slow but steady erosion of KJR’s subscriber base. The only way to replenish it is for subscribers like you to encourage non-subscribers like that guy three cubicles to your left to sign up.

How about it?

Now on’s CIO Survival Guide: “Why IT surveys can’t be trusted for strategic decisions.” All surveys will tell you is whose company you’re keeping.

“We Must Regulate A.I. Here’s How,” writes Lina Khan, chair of the Federal Trade Commission, in the 5/3/2023 edition of the New York Times.

Ms. Khan is not stupid, and she makes a compelling case that unregulated generative AI might result in many deleterious outcomes. Regrettably, she misses the mark in two key aspects of the situation.

The first is easy to spot: That unregulated AI might be problematic doesn’t mean regulated AI will not be problematic.

The second and more consequential: Defining generative AI as a category will prove challenging at best.

The reason? Generative AI technologies are already sliding down the slippery evolutionary slope that many earlier technologies have traversed, from application-layer solutions to platform-layer building blocks.

If the point isn’t clear, consider SharePoint. It started out as an application – a document management system. As Microsoft steadily added capabilities to it, SharePoint morphed from a DMS into a general-purpose application development environment.

Imagine that some of SharePoint’s capabilities started to look alarming in some way or other.

No, not annoying. Alarming. Enough so that various pundits called for its regulation.

Would that mean every application programmed using SharePoint as, say, its DBMS should be … heck, could be … subject to regulation?

Well, SharePoint-as-Platform could, in theory, be regulated as a thing. That might last for a short while, but only until Microsoft disaggregated SharePoint as a platform, breaking it up into a collection of operating system services, much as happened with browser capabilities decades ago.

We can expect the same with generative AI. Its capabilities, from researcher-and-essay-writer to deep-fake-creator, will, we can predict with confidence, become embedded as platform-layer technologies in large-scale application suites, where their regulation will be no more possible than regulating any other embedded IT platform technology.

Put differently, generative AI will be built into general-purpose business applications. It’s easy to envision, for example, generative-AI-enabled ERP, CRM, and HRIS suites. Try to imagine distilling and regulating just the generative-AI capabilities that will be built into these already-familiar application categories.

The threat(s)

I asked ChatGPT to list the five most important generative AI threats. It answered with five versions of the same threat, namely, misinformation and disinformation, whether in the form of deepfakes, counterfeits, or other incursions into what’s real.

The threats, from where IT sits

Just my opinion here (not ChatGPT’s opinion): The single most obvious threat from generative AI is to information security. Deepfakes will vastly increase an organization’s vulnerability to the various forms of phishing attack, with all their well-known data-theft and ransomware consequences.

Generative AI will also create whole new categories of business sabotage. Imagine the damage an unscrupulous competitor could do to your company’s image and brands using even the current generation of deepfake creation software. If this doesn’t look to you like an IT problem, it’s time to re-think what you consider IT’s role in the business to be.

A popular framework for formulating business strategy, re-framed for the KJR perspective, is TOWS, which stands for Threats, Opportunities, Weaknesses, Strengths. As has been pointed out here from time to time, a capability is an opportunity when your business achieves it and a threat when a competitor does. And many of today’s business threats and opportunities come from new forms of information technology.

So it isn’t good enough for IT to implement and manage business applications and their underlying platforms while declaring the business misuses of generative AI to be Someone Else’s Problem. IT has strategic roles to play, including the identification of IT-driven threats and opportunities.

If not regulation, then what?

Getting back to how we as businesses and as society as a whole should be dealing with the threats and opportunities posed by generative AI, regulation isn’t going to do us much good.

What will? First, foremost, and most obviously we can expect the purveyors of anti-malware products to build machine-learning technology into their products, to identify generative-AI-based phishing and other penetration attempts.

Second, we can expect the purveyors of marketing automation systems to build machine-learning-based content scanning capabilities into their products, to help you spot deepfake and other brand-damaging content so your marketing department is equipped to deal with this category of IT-driven business threat.

Bob’s last word: More generally, there are problems in this world for which centralized governance provides the best … and sometimes only … solutions. There are others for which a marketplace model works better.

When it comes to generative AI, and this is just my opinion, mind you, the marketplace approach will prove to be quite unsatisfactory.

But far more satisfactory than any regulatory alternative.

Sometimes, in the wise words of Argo’s Tony Mendez, we have to go with the best bad plan we have.

Bob’s sales pitch: Every week in Keep the Joint Running I try to provide perspectives subscribers will find useful, readable, and unconventional – there’s no point in being repetitious, and even less point in being boring.

You’re the only form of promotion I use, so if you find KJR’s weekly points of view valuable, it’s up to you to spread the good word.

This week on’s CIO Survival Guide: “7 venial sins of IT management.” They aren’t the worst things a CIO can do, but they certainly aren’t good ideas.