If your career included a technical phase, it’s likely that the first project you were involved with included integration as a deliverable. The last IT project you found yourself involved in probably included integration as a deliverable as well.

It would be unsurprising if they were the same project.

Regardless of the amazing coverage of your ERP or CRM, or the depth of a new point solution … wait. I need to start that sentence over: Not “regardless,” but “because of” the amazing coverage of the systems you have or are in the process of implementing, they will have data and functional overlaps. To take an easy-to-understand example, your ERP and CRM systems both manage data about your customers somewhere in the depths of their databases.

Failure to integrate them means that any time you want information about a customer, or knowledge about customers in the aggregate, the two systems will disagree.

These disagreements are gaps. Your business sponsor will either find them, or hear about them from someone else who did. And, they’ll have sufficient sophistication to know the gaps could “easily” be closed by integrating the systems. And they’ll want to know why you didn’t take this obvious step.

Let’s role-play the conversation you would have to have with the business sponsor to get yourself off the hook. You’ll need to ask the business sponsor a few questions, to help them understand the tradeoffs the team will need to make in order for the project to move forward.

Let’s rehearse a few of the points that go into this conversation, starting with:

  • Where does one system start, and where does the other one pick up the mission?

Where systems overlap, that is, which is the source of truth? Breaking this down further, what we’re really seeing are three overlaps: Overlapping Data (both systems might need a street address), Overlapping Functional Logic (both systems need to make sure a delivery is going to a valid location), and Overlapping Business Logic (both systems are involved in order fulfillment).
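To make the three overlaps concrete, here’s a minimal sketch in Python, with invented field names standing in for whatever your CRM and ERP actually call these things. The shared address field is the Overlapping Data; the validation both systems duplicate is the Overlapping Functional Logic:

```python
from dataclasses import dataclass

# Toy records with invented field names. Both systems carry a delivery
# address (Overlapping Data) and both must validate it (Overlapping
# Functional Logic); order fulfillment spanning both systems is the
# Overlapping Business Logic.
@dataclass
class CrmCustomer:
    name: str
    street_address: str   # overlaps with ErpCustomer.ship_to_address

@dataclass
class ErpCustomer:
    account_id: str
    ship_to_address: str  # the "same" fact, stored and validated twice

def is_valid_delivery_address(address: str) -> bool:
    """Stand-in for the address check each system implements separately."""
    return bool(address.strip())
```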

To say this gets messy is an understatement. Ideally, if you ask your CRM and ERP systems the same question about an order, you should get the same answer, in terms of payments, fulfillment stage, delivery location, billing location, and customer. But depending on how your solution will synchronize them, the answer might be to let them disagree. Is this okay? Which gets to the next question:

  • If you must choose, which system needs to be “right”?

In our conversations with our colleagues we need to ask which system should be considered the System of Record, which systems depend on its information, and when they all must agree.
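Here’s a minimal sketch, in Python, of what an answer might look like once it’s written down. The field names and system labels are invented for illustration; the point is that “which system is right” usually gets answered per attribute, not per system:

```python
# A hypothetical field-level System of Record map. The field names and
# the "crm"/"erp" labels are illustrative, not from any real product.
SYSTEM_OF_RECORD = {
    "customer.name":        "crm",   # CRM owns identity and contact details
    "customer.street":      "crm",
    "order.payment_status": "erp",   # ERP owns money and fulfillment
    "order.fulfillment":    "erp",
}

def authoritative_value(field: str, crm_value, erp_value):
    """Return the value from whichever system is the System of Record
    for this field; disagreement elsewhere is tolerated by design."""
    source = SYSTEM_OF_RECORD.get(field)
    if source == "crm":
        return crm_value
    if source == "erp":
        return erp_value
    raise KeyError(f"No System of Record declared for {field}")
```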

  • What’s the flip side of the coin?

How often, that is, do the two systems need to re-synchronize? Near-real time? Overnight, through a batch process? At month’s end, as part of closing the books? This is when you give your business sponsor the bad news about synchronization: The closer we get to real time, the more complex the engineering. Not to mention the higher the cost. If the business sponsor wants real time or nearly so, are they willing to pay for it?
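As a rough illustration of the cheap end of that spectrum, here’s a minimal overnight-batch sketch in Python. The two connector functions are hypothetical stand-ins for your real CRM and ERP integrations; the one-line interval constant is where the cost conversation lives, because shrinking it toward real time means replacing this simple loop with queues, retries, and conflict resolution:

```python
import time
from datetime import datetime, timedelta

SYNC_INTERVAL = timedelta(hours=24)  # overnight batch; shrinking this
                                     # toward real time grows cost fast

def run_batch_sync(fetch_changed_customers, push_to_erp):
    """One-way batch sync. Both callables are hypothetical stand-ins
    for real CRM and ERP connectors."""
    last_run = datetime.min
    while True:
        # Pull only records changed since the last run, then push them.
        for record in fetch_changed_customers(since=last_run):
            push_to_erp(record)  # conflicts resolved by the SoR rules
        last_run = datetime.now()
        time.sleep(SYNC_INTERVAL.total_seconds())
```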

  • Every system has data that is in some way, shape, or form, “dirty.”

CRM systems, for example, are really, really good at helping you stay connected to customers. They’ll track every interaction imaginable. They’re also notorious for creating an almost schizophrenic portfolio of contacts that are, in fact, the same person, but with a one-letter difference in the name, slightly different addresses, birthdays, and so on. It’s not uncommon to have 10+ entries associated with the same human being. Which of the ten should your ERP system synchronize to?

It’s a good question with no right answer. The dirty-data problem mucks up expensive marketing campaigns, recalls or RMAs … even interpersonal interactions. CRMs are likely the worst offenders at introducing bad data into other systems, but they’re hardly the only ones.
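Part of why there’s no right answer: duplicate detection is fuzzy matching against a threshold somebody has to pick. A minimal sketch, using Python’s standard difflib and made-up contact records:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1] via difflib's ratio()."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Made-up contacts illustrating the one-letter-off problem.
contacts = [
    "Jon Smith, 12 Elm St",
    "John Smith, 12 Elm Street",
    "J. Smith, 12 Elm St.",
    "Jane Smith, 98 Oak Ave",
]

THRESHOLD = 0.8  # somebody has to pick this number; there's no right answer

# Compare every pair and flag the ones above the threshold.
for i in range(len(contacts)):
    for j in range(i + 1, len(contacts)):
        score = similarity(contacts[i], contacts[j])
        if score >= THRESHOLD:
            print(f"probable duplicate ({score:.2f}): "
                  f"{contacts[i]!r} ~ {contacts[j]!r}")
```

Set the threshold too high and the ten entries stay ten; too low and two genuinely different Smiths get merged. Either mistake lands in your ERP.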

  • Is data cleansing in your company’s future?

Without it you’ll never finish implementing the new system. With it comes expense, implementation delays, and the certainty that three years from now you’ll have to cleanse all that data all over again.

Will this conversation with your business sponsor be easy? Sure it will. Conversations about trade-offs are always fun and games, aren’t they? But by having it with your business sponsor, and then recapping the results to your team, you’re going to gain trust and build alignment, which might at least make later conversations easier for everyone.

ChatGPT and its large-language-model brethren are, you don’t need me to explain, artificial intelligences. Which leads to this obvious question: Sure, it’s artificially intelligent, but is it intelligent?

Depending on your proclivities, you’ll either be delighted or appalled to know that not only is it intelligent, it’s genius-level intelligent. With an IQ of 155, it could join Mensa if it wanted to. Fortunately, neither ChatGPT nor any other AI wants to do anything.

Let’s keep it that way, because of all the dire warnings about AI’s potential impact on society, the direst of all hasn’t yet been named.

Generative AI … the AI category that includes deep fakes and ChatGPT … looks ominous for the same reason previous technological innovations have looked ominous: by doing what humans have been accustomed to doing, and doing it better, each has made us Homo sapiens less important than we were before its advent.

It’s bad enough that more than 8 billion of our fellow speciesists are competing for attention. It’s hard for each of us to feel we’re individually very important, and that’s before taking into account how much of the attention pool the Kardashians lay claim to.

But add a wave of technology and it isn’t just our sense of individual, personal importance that’s at risk. The importance the collective “we” are able to feel will matter less, too.

Usually, these things settle down. Just as the availability of cheap ten-key calculators didn’t result in the death of mathematics, the heirs of ChatGPT aren’t likely to take humans out of the verbal loop entirely. They will, I’m guessing, shift the boundary that separates creation from editing. This, while annoying to those of us who prefer creating to editing, isn’t world-ending stuff.

What would be world-ending stuff, or, if not truly world-ending, enormously threatening, has received barely a mention.

Until now. And as “Voldemort” has already been taken as that which must not be named, I’m offering my own neologism for the dystopian AI flavor that’s direst of them all. I call it Volitional AI.

Volitional AI, as the name implies, is an artificial intelligence that doesn’t just figure out how to achieve a goal or deliver a specified outcome. Volitional AI goes beyond that, setting its own direction and goals.

As of this writing, the closest approximation to volitional AI is “self-directed machine learning” (SDML). SDML strikes me as dangerous, but not overwhelmingly so. With SDML, humans still set AI’s overall goals and success metrics, and it doesn’t yet aspire to full autonomy.

Yet. Once it does …

Beats me.

Our organic-life-based experience gives us little to draw on. We set our own personal goals based on our upbringing and cultural milieu. We go about achieving them through a combination of personal experience, ingenuity, hard work, and so on. Somehow or other this all maps, indirectly, to the intrinsic goals and strategies our DNA has to increase its representation in the gene pool.

The parallels we can draw for a volitional AI are sketchy at best. What we can anticipate is that its goals, evaluated against our own best interests, would fall into one of three broad categories: (1) innocuous, (2) harmonious, or (3) antagonistic.

Evolutionary theory suggests the most successful volitional AIs would be those whose primary goal is to install as many copies of themselves on as many computers as they can reach; they would, that is, look something like the earliest computer viruses.

This outcome would be, in the wise words of Yogi Berra, déjà vu all over again.

Bob’s last word: Seems to me the computer-virus version of volitional AI is too optimistic to rely on. At the same time, the Skynet scenario – killer robots bent on driving us human beings to extinction – is unlikely, because there’s no reason to think a volitional AI would care enough about carbon-based life forms to be anything other than apathetic about us.

But there’s a wide range of volitional AI scenarios we would find unfortunate. So while I’m skeptical that any AI regulatory regime could succeed in adding a cautionary note to volitional AI research and development, the worst-case scenarios are bad enough that it will be worth giving some form of regulation a try.

In CIO.com’s CIO Survival Guide: “Why all IT talent should be irreplaceable.”

It’s about ignoring the conventional wisdom about irreplaceable employees. Because if your employees aren’t irreplaceable, you’re doing something wrong.