This coming Thursday I’m participating on a panel at the Offshore Outsourcing Conference in Las Vegas. The panel’s subject is “Offshore outsourcing backlash.” If you can’t make the event, I’ll give you a preview of what I’m going to say: “Of course there’s backlash. Just what else, exactly, did you expect?”

Perhaps because of this panel, outsourcing, and more specifically the notoriously high failure rate of outsourcing arrangements, is on my mind. So is the too-seldom-remembered dictum that to optimize the whole you have to suboptimize the parts.

There’s a connection, and it lies in one of those well-hidden assumptions that only reveals itself when you’re looking from exactly the right angle.

Why do companies outsource? It’s often for the wrong reason — because a particular business function isn’t “core” to the business, which is to say the function doesn’t create competitive advantage. There’s little evidence to support this theory, and quite a few reasons to be skeptical, as has been mentioned in this space previously.

Here’s another reason companies outsource: The logic for outsourcing non-core competencies emphasizes the likelihood that the outsourcing vendor will be better at the function than you will be.

Ignore for a moment that this will only sometimes be true, and less often as the company doing the outsourcing increases in size and scale. Pretend it’s a universal truth. Let’s think for a moment about what “better” means.

When a company outsources a function, it has to define the outsourcer’s responsibilities contractually. That means spelling out specific responsibilities and negotiating service levels for each of them. How are you going to do that in a way that’s fair?

Most companies do so by insisting on industry best practices and applying industry benchmarks. Industry best practices are generally defined in the context of running the business function in question as a separate, efficient business. As we saw last week, benchmarks are one-size-fits-all measures predicated on the assumption that your goal is to optimize this business function as a separate entity. It took a while, but we’ve arrived: Outsourcing is predicated on the hidden assumption that you want to optimize the particular part being outsourced.

But to optimize the whole, you often have to suboptimize the parts. How are you going to write that requirement into an outsourcing contract?

Let’s imagine you do. After all, you can certainly treat any business function as a black box and start the outsourcing process by characterizing its inputs and outputs, required resources and constraints. All you need is a formal process model that establishes in quantitative fashion exactly how the parts fit together to make the enterprise function. Every enterprise has one of these, doesn’t it?

Okay, let’s imagine yours does, or at least comes close enough that you can define in realistic terms what the outsourcer is supposed to do for you and at what cost. It should work, shouldn’t it?

Yes, it should. For a while. Many of the big, high-profile outsourcing arrangements are ten-year contracts. Last I looked, there’s little in the world of business anyone expects to last longer than three. So what should we expect to happen in year four of an average ten-year outsource?

I’d expect it to be time to renegotiate, because what you need is likely to have changed, in numerous, subtle, hard-to-define ways.

You have to suboptimize the parts to optimize the whole. Most outsourcing arrangements violate this premise, and they do so with the best of intentions: To perform a particular piece of work as well as possible.

Oddly, sometimes doing it worse is better.

According to the Theory of Else, IT’s job is done when the software runs. It’s up to others in the business to make sure useful business results follow.

As last week’s column explained, the Theory of Else is a good way to make sure IT gets the blame when projects fail to deliver their expected value. The alternative, the Theory of Everything Else, is what will see you through to success: IT has to take on whatever tasks nobody else in the company is willing to do — not because it’s IT’s responsibility, not because it’s “the right thing to do,” but for the simple and unavoidable reason that your other alternatives are worse.

One of the most frequent items on the “everything else” list is the redesign of one or more business processes. And while it might seem counterintuitive, IT is a logical home for the discipline of business process redesign, for two very different reasons.

The first is that since IT is involved in every business change — something that isn’t true for any individual business division — placing the discipline inside IT means its practitioners will have the opportunity to practice their craft more than once, letting them deepen and perfect their skills (the same, by the way, is true of project management).

That’s the organizational logic. There’s also the knowledge of the process design discipline that already exists inside IT. No, I’m not talking about your business analysts, although some of them might have built some abilities along these lines. It’s your network engineers who really understand the subject, because everything you ever needed to know about business process design is already built into TCP/IP.

What TCP/IP does is break a message into chunks and send them through an intrinsically unreliable network, built out of a collection of store-and-forward queues, to a defined destination where they’re reassembled into the original message. Ignore the break-a-message-into-chunks bit and what’s left?

A pretty good representation of most business processes, that’s what. Business processes consist of work queues. A piece of work is pushed onto a queue, where it waits its turn. When its turn arrives, the queue manager does whatever is required, then forwards the completed work to the next queue, where it’s worked on in its turn.
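If that sounds abstract, here’s a minimal sketch of the idea in Python. The stage names and handler functions are hypothetical, invented purely for illustration; the point is only the shape: a chain of queues, each feeding the next.

```python
from queue import Queue

# Hypothetical two-stage process: approval, then fulfillment.
def approve(item):
    item["approved"] = True
    return item

def fulfill(item):
    item["fulfilled"] = True
    return item

STAGES = [("approval", approve), ("fulfillment", fulfill)]

def run_process(work_items):
    """Push every work item through each stage's queue in order."""
    current = list(work_items)
    for name, handler in STAGES:
        q = Queue()
        for item in current:
            q.put(item)                     # work waits its turn in the queue
        finished = []
        while not q.empty():
            item = q.get()                  # the queue manager picks it up,
            finished.append(handler(item))  # does whatever is required...
        current = finished                  # ...and forwards it to the next queue
    return current

print(run_process([{"id": 1}, {"id": 2}]))
```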

A process like that isn’t very different from a TCP/IP network, although the routers that make up the network don’t do much to each packet other than check it for errors and then either forward or discard it.

Any given link or router can lose a packet, which is to say, the network is intrinsically unreliable. That’s the IP part. The responsibility of the TCP part is to detect when that happens and correct the problem.

In a business process, work can get lost moving from queue to queue, too. A well-designed process assumes that from time to time something will go wrong along the way, and includes error checks to catch it when it does. Same thing.
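The business-process equivalent of TCP’s reliability machinery might look something like the following sketch. Everything here is invented for illustration (real processes lose work to mis-filed forms and dropped emails, not dropped packets), but the acknowledge-and-retry logic is the same.

```python
import random

def unreliable_handoff(item):
    """Hypothetical handoff that silently loses work 30% of the time.
    Returns True when the next queue acknowledges receipt."""
    return random.random() > 0.3

def reliable_handoff(item, max_retries=5):
    """TCP-style reliability on top of an unreliable handoff:
    send, wait for an acknowledgment, resend if it never comes."""
    for attempt in range(max_retries):
        if unreliable_handoff(item):
            return True   # acknowledged: the work arrived
        # No acknowledgment: assume the work was lost and send it again.
    return False          # time to escalate to a human

print(reliable_handoff({"id": 42}))
```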

Network engineers understand they have to deal with two different measures of speed: latency, the end-to-end transmission delay for a single packet, and bandwidth, the total amount of information the network can handle in a given period of time.

Business process designers need to consider both cycle time and throughput — same ideas, different words.
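A toy calculation makes the distinction concrete. The stage timings below are invented, and the throughput formula assumes the stages work concurrently on different items, the way routers handle different packets at once.

```python
# Hypothetical per-item processing time at each stage, in minutes.
stage_minutes = {"intake": 5, "review": 20, "approval": 8}

# Cycle time (latency): end-to-end delay for one piece of work,
# ignoring any time spent waiting in queues.
cycle_time = sum(stage_minutes.values())       # 33 minutes per item

# Throughput (bandwidth): with stages working in parallel on
# different items, the slowest stage sets the pace.
throughput = 60 / max(stage_minutes.values())  # 3 items per hour

print(f"cycle time: {cycle_time} min; throughput: {throughput:.1f} items/hr")
```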

To improve network latency and end-to-end bandwidth, network engineers find and speed up bottlenecks. Speeding up a part of the network that isn’t a bottleneck achieves nothing — installing faster routers is pointless if the ones you have already outpace the speed of the connection that links them. Business process designers also need to look for process bottlenecks instead of just speeding up whatever process step is most easily accelerated.
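Continuing the toy numbers above, a before-and-after comparison shows why only the bottleneck matters:

```python
def throughput_per_hour(stage_minutes):
    """Throughput of a pipeline of queues is set by its slowest stage."""
    return 60 / max(stage_minutes.values())

stages = {"intake": 5, "review": 20, "approval": 8}
print(throughput_per_hour(stages))   # 3.0 items/hour

stages["intake"] = 1                 # speed up a non-bottleneck stage
print(throughput_per_hour(stages))   # still 3.0: nothing gained

stages["review"] = 10                # speed up the actual bottleneck
print(throughput_per_hour(stages))   # 6.0: throughput doubles
```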

Network engineers are accustomed to the concept of a management console — a display that lets them know, at a glance, the health of the network. Manageability is built into the design of every router to make network management possible.

How many business process designers have the same understanding? Yes, it’s easier to manage a router than an employee. It still isn’t all that hard to build manageability into the queues that make up a business process, and construct a dashboard so the process manager can tell, at a glance, the health of the process.
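A bare-bones version of such a dashboard might look like the sketch below, on the assumption that each queue can report its depth and the age of its oldest item. The queue names and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class QueueStats:
    name: str
    depth: int           # items currently waiting
    oldest_hours: float  # age of the oldest waiting item

def health(q: QueueStats) -> str:
    """Traffic-light health check with made-up thresholds."""
    if q.depth > 50 or q.oldest_hours > 24:
        return "RED"
    if q.depth > 20 or q.oldest_hours > 8:
        return "YELLOW"
    return "GREEN"

for q in [QueueStats("intake", 12, 2.0),
          QueueStats("review", 64, 30.0),
          QueueStats("approval", 25, 6.5)]:
    print(f"{q.name:10} {health(q)}")
```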

Can a network engineer really succeed as a business process engineer?

Beats me. It sure would be fun to facilitate the design sessions, though, don’t you think?