Debates are pointless.

We’ve been told, you and I, that debating is the best way to understand an issue, because you get to hear both sides in an informed sort of way.

Except … at the end of a formal debate, what do the judges do (or, at the end of a presidential or vice-presidential debate, what do the pundits and polls do)?

They decide who won. Not which position was right. Quite the opposite — the entire premise of the debate format is that both positions are always equally right. All debates do is determine who is the better arguer. It’s intellectual relativism at its finest.

At the end of the presidential and vice-presidential debates, what have we learned? Nothing more about the issues or which of the two debaters would be better in office, unless the office they’re running for is Debater In Chief.

Welcome to my once-every-four-years pre-election diatribe, built on the thin pretext that it relates to the business challenge of deciding who to hire and retain. As with past diatribes, I won’t suggest who you should vote for, just how each of us might go about deciding who to vote for.

Thought #1: Be happy. Yes, I think one of the two presidential tickets would be better for this country than the other. But. Whichever ticket is elected, we’ll end up with very smart, very qualified individuals as both president and vice president.

Like him or not, Barack Obama is very smart, and has demonstrated that he reviews evidence and listens to smart, well-informed individuals before making his decisions.

And, like him or not, Mitt Romney has demonstrated throughout his career that whatever else he might be, he’s also a very smart guy who knows how to listen, learn, and get things done.

Also, unlike many past elections, where one or both of the vice-presidential candidates were bad jokes, Joe Biden, for all his gaffes, is intensely knowledgeable, especially about foreign policy, while Paul Ryan is better known for his wonkiness than his charisma.

We have four highly qualified candidates. We should always be so lucky.

Thought #2: From a policy perspective, the election matters little. Whoever is elected, so long as either party holds more than 40% of the seats in the Senate, it can block just about everything related to implementing presidential policy. Never mind which party started it. No matter who is elected president, and whichever party has a majority in the House and Senate, we can expect this dynamic to continue.

The only cure I see for this is instant run-off (aka Ranked Choice Voting). Here’s why: Instant run-off allows citizens to vote for the candidate they think is best qualified rather than for whichever major-party candidate they find least objectionable.

In case this point isn’t clear: Under the current system, voting for the third-party candidate you like best amounts to half a vote against whichever major-party candidate you’d otherwise prefer, because if your third-party candidate loses, your ballot does nothing to keep the major-party candidate you like least out of office.
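To make the mechanics concrete, here’s a minimal sketch of an instant run-off tally in TypeScript. The candidate names and ballots are invented for illustration, and real elections add tie-breaking and reporting rules this toy version skips.

```typescript
type Ballot = string[]; // candidates listed from most to least preferred

function instantRunoff(ballots: Ballot[]): string {
  const remaining = new Set(ballots.flat());
  while (remaining.size > 1) {
    // Count each ballot toward its highest-ranked surviving candidate.
    const tally = new Map<string, number>();
    for (const c of remaining) tally.set(c, 0);
    for (const b of ballots) {
      const pick = b.find((c) => remaining.has(c));
      if (pick !== undefined) tally.set(pick, tally.get(pick)! + 1);
    }
    // A majority of live ballots wins outright.
    const live = [...tally.values()].reduce((a, b) => a + b, 0);
    for (const [c, votes] of tally) if (votes > live / 2) return c;
    // Otherwise drop the last-place candidate and recount.
    const loser = [...tally.entries()].sort((a, b) => a[1] - b[1])[0][0];
    remaining.delete(loser);
  }
  return [...remaining][0];
}

// A third-party vote isn't half a vote for the other side here: when
// ThirdParty is eliminated, those ballots transfer to MajorA.
const winner = instantRunoff([
  ["ThirdParty", "MajorA"], ["ThirdParty", "MajorA"],
  ["MajorA"], ["MajorA"],
  ["MajorB"], ["MajorB"], ["MajorB"],
]);
console.log(winner); // MajorA
```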

So instant run-off is what’s needed for third-party candidates to get elected. We’d only need a few of them in the House and Senate for the third party to be the tie-breaker for every vote (and cloture vote). A viable third party might actually break the gridlock.

And no, gridlock isn’t desirable. Change is constant. We need to adapt, our government just as much as businesses and ourselves. Gridlock prevents that.

Thought #3: This time, perhaps party affiliation should be a deciding factor. I’ve never taken this position in the past. I hope I never take it in the future. But given the gridlock issue, and given that the two major parties have behaved very differently in the recent past … not that one or the other is better or worse, but that they have very different flaws … it might make sense to evaluate the candidates based on which of the two parties they’ve decided to lead and be constrained by, and why you think they made that choice.

That, in fact, is how I made my decision this time. I find one of the two parties far more consistently detestable than the other, and far more than I feel strongly about either presidential candidate. And since the winner’s party gains considerably more power by directing the executive branch, that in itself is a major issue this year.

Thought #4: Vote. Remember, please, that as citizens we aren’t government’s customers, nor are we disinterested spectators. We’re our government’s owners, and as owners we’re responsible for it. Vote.

Enough. Thanks for indulging me. Next week it’s back to business.

The next big trend in information technology is client/server computing, only nobody seems to admit it.

History first:

In the early 1990s, client/server was the Next Big Thing in IT. In its earliest form it partitioned applications into database management — the server part — and everything else, which was the client part.
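In today’s terms, that first-generation split might look something like the following sketch, with Node’s “pg” Postgres driver standing in for the ODBC-era database drivers. The connection string and table are invented; the point is that only the SQL executes on the server, while validation, calculation, and presentation all live in the client program.

```typescript
import { Client } from "pg"; // hypothetical stand-in for an ODBC-era driver

async function main() {
  // Server tier: a database engine somewhere on the network.
  const db = new Client({ connectionString: "postgres://app@dbserver/sales" });
  await db.connect();
  const { rows } = await db.query(
    "SELECT id, total FROM orders WHERE status = $1",
    ["open"]
  );
  await db.end();

  // Client tier: business logic and presentation run on the desktop.
  const grandTotal = rows.reduce((sum, r) => sum + Number(r.total), 0);
  console.log(`${rows.length} open orders, $${grandTotal.toFixed(2)} outstanding`);
}

main().catch(console.error);
```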

It worked pretty well, too, except for a few glitches, like:

  • Desktop operating systems weren’t ready: These were the days of DOS-based Windows. NT was just emerging as the next generation, with OS/2 alongside it as IT’s we-sure-wish-it-had-a-chance alternative. Client/server computing meant PCs couldn’t just be platforms for enhancing the effectiveness of workgroups and individual employees anymore. They had to be production-grade platforms.
  • Microsoft didn’t respect its DLLs: The phrase was “DLL hell.” What it meant was that Microsoft issued patches that changed the behavior of DLLs in ways that broke applications that relied on them … including client/server applications, a headache IT professionals found seriously annoying, and for good reason.

  • Servers proliferated: Client/server partitioned database management from everything else. Soon, IT theoreticians figured out the benefits of further partitioning. The client part of client/server became the presentation layer; the integration logic partition spawned the whole Enterprise Application Integration marketplace; and moving work from one place to another led to workflow systems and then “business process management” (a new name for the same old thing — neither the first nor last time that’s happened in IT).

What was left were the various algorithms and business case handling that constitute core business logic, which either ran on what we ended up calling “app servers” or as stored procedures in the database.

Which in turn meant your average business application needed three or four separate servers plus the desktop. Client/server started out as a simpler alternative to mainframe computing, but it became darned complicated pretty quickly.
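To picture how many moving parts that is, here’s a schematic sketch, with every name in it invented, of one “simple” transaction touching each tier in turn; in production, each stubbed function below would run on its own server.

```typescript
type Order = { id: string; total: number };

// App server: core business logic, e.g. a (hypothetical) pricing rule.
const price = (o: Order): Order => ({ ...o, total: o.total * 1.07 });

// Database server: persistence, stubbed here as a log line.
const persist = (o: Order) => console.log(`DB: INSERT order ${o.id}`);

// Integration (EAI) server: fans the event out to other systems.
const integrate = (o: Order) => console.log(`EAI: publish OrderPlaced(${o.id})`);

// Workflow/BPM server: moves the case to its next step.
const workflow = (o: Order) => console.log(`BPM: route ${o.id} to fulfillment`);

// Presentation layer: turns the result into something the client displays.
const present = (o: Order) => `Order ${o.id}: $${o.total.toFixed(2)}`;

// One transaction, four servers plus the desktop.
const order = price({ id: "A-1001", total: 100 });
persist(order);
integrate(order);
workflow(order);
console.log(present(order));
```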

As IT’s acceptance of the PC had never been more than grudging, a standard narrative quickly permeated the discussion: The problem with client/server was the need to deploy software to the desktop.

It was the dreaded fat client, and, fat being a bad thing, the UI was moved to the browser, while presentation logic moved to yet another server. The world was safe for IT, if clunky for computer users, who had become accustomed to richly functional, snappily performing “fat” interfaces.

To help them out, browsers became “richer,” the exact same thing except that (1) “rich” is good while “fat” is bad; and (2) nobody had to admit they’d been wrong about anything along the way.

So where are we now? Desktop operating systems are more than robust enough to support production-grade software, Microsoft now respects its DLLs, and we have excellent tools for pushing software to PCs. The original rationale for browser-based computing is pretty much a historical curiosity.

A new rationale arose to take its place, though: Browser-based apps let us develop once and run anywhere. It was a lovely theory, still espoused everywhere except those places that actually deploy such things. Those who have to develop browser-based apps know just how interesting software quality assurance becomes when it requires a lab that’s chock full o’browsers … the bare minimum is three versions each of Internet Explorer, Firefox, Chrome, and Safari, each running on at least three versions of every operating system they’re written for, tested on at least three different screen resolutions.
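Taking the column’s “at least three” figures at face value, the combinatorics alone make the point; this back-of-the-envelope sketch counts only the minimum configurations:

```typescript
// Lower bound on the browser-QA matrix described above.
const browsers = ["Internet Explorer", "Firefox", "Chrome", "Safari"].length; // 4
const versionsPerBrowser = 3; // "three versions each"
const osVersions = 3;         // "three versions of every operating system"
const resolutions = 3;        // "three different screen resolutions"

console.log(browsers * versionsPerBrowser * osVersions * resolutions);
// => 108 distinct configurations, before counting multiple OS families
```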

And now we have tablets, just in time to save the day, because on tablets, browser-based interfaces are rapidly being supplanted by (drum-roll, please) … that’s right, client/server apps.

Oh, that isn’t what they’re called. But in, for example, Apple’s App Store, you’ll find plenty of companies (ones that publish content consumed over the Internet, engage in eCommerce, or both) offering free iPad apps that put a slicker user interface on the same functionality as their websites.

That’s right: The presentation logic is deployed to the iPad or Android tablet as an App; the rest executes on corporate servers. Sounds like n-tier client/server to me.
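Strip away the app-store packaging and the pattern is plain. Here’s a minimal sketch of the client side: a native (or desktop) front end calling the same HTTP endpoint the website uses. The URL, types, and token handling are invented for illustration.

```typescript
// Presentation tier, deployed to the device; everything else stays on
// the corporate servers behind this (hypothetical) endpoint.
type Order = { id: string; status: string };

async function fetchOpenOrders(baseUrl: string, token: string): Promise<Order[]> {
  const res = await fetch(`${baseUrl}/api/orders?status=open`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Server said ${res.status}`);
  return res.json(); // business logic never leaves the server
}

// The app renders the result natively instead of in a browser page.
fetchOpenOrders("https://example.com", "app-token")
  .then((orders) => orders.forEach((o) => console.log(`${o.id}: ${o.status}`)));
```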

If you aren’t already deploying custom tablet apps as rich, tailored front-ends to your existing Web-available functionality, you probably have such things on the drawing board. And once you’re back in this business, you might as well move away from browser-based deployment to custom desktop/laptop front-ends as well.

Is it more work? Yes, it is. So here’s a research project that’s tailor-made for someone’s graduate thesis: Compare how long it takes employees to perform a series of standard tasks through browser-based user interfaces with the time they need using customized clients. My unencumbered-by-any-facts guess is that the custom clients would win, and win by a big enough margin to cover the spread.

Call it what you like, it’s client/server reborn.