Including Keep the Joint Running and its predecessors – InfoWorld’s IS Survival Guide and Advice Line – I’ve been sharing thoughts and opinions for more than 26 years.
When I started, I wanted people to read what I had to say and think, “Wow!” Now, I’m gratified if they think, “A column a week for 26 years? Wow!”
Quantity seems to have gradually overtaken excellence as my most crucial KPI.
Meanwhile, I’m more likely to remember having written about a subject than you are to remember having read about it, for what I think are obvious reasons. As a result I sometimes obsess about avoiding repetition – worrying that if I wrote about a subject in 1998, you’ll feel cheated if I write about the same subject here in 2023.
Which brings us to this week’s subject: “Shadow IT,” also known as “Rogue IT,” “DIY IT,” and, if you want to encourage Gartner in its perennial game of claiming concept ownership by attaching snappy new names to unoriginal concepts, “Citizen Developer.”
Or you could use the handle my co-author Dave Kaiser and I introduced in There’s No Such Thing as an IT Project as a contrast to Shadow IT: “Illuminated IT.”
As with so many ideas in life, illuminated IT comes with trade-offs, making it sadly easy to succumb to confirmation bias when deciding whether to encourage it or try to stamp it out.
The stamp-it-out logic
End-users aren’t trained developers. They might fall short when it comes to application architecture, testing, or security. And if or when something goes wrong, IT will have to pick up the pieces.
Not only that, but when the end-user developer “calls in rich,” IT will be stuck supporting the mess they left behind.
The philately-free logic (okay, it’s a stretch)
DIY development increases IT’s bandwidth – not once, but in two complementary ways.
The first is the obvious one: a DIY developer still counts as a developer. Maybe not an ideal developer, but I’ll bet not all of the developers housed within the IT organization are ideal ones either.
The less obvious one? For the most part, IT development, along with IT “development” (when IT configures and integrates commercial off-the-shelf software), involves a business analyst here and there. DIY IT does not.
Related: No more arguments about whether what IT delivers is what the business needs.
The best of both worlds
Once IT jettisons its protectionist instincts, and once business users jettison their IT-distrust instincts, getting the best of both worlds isn’t particularly complicated:
1. Encourage using what you already have. It isn’t uncommon that an application suite you already license provides the additional functionality the business needs. The formula for success: Inform, train, follow up.
2. Encourage COTS. If an application provider licenses a solution that does what business users need, buying it reduces the risk of losing support when the end-user developer finds something else to do.
3. Establish platform standards. Whether it’s Excel, Access, or a “no-code/low-code” cloud-based alternative, setting one of these as the supported and recommended development environment reduces IT’s support burden. Once you’ve established the standard, offer training and support as needed.
4. Inventory. Ask business users to provide three layers of documentation for anything they develop. Layer 1 is the application’s title (“MS Word” is an application title). Layer 2 is the application’s headline (“General-purpose word-processing application” is an application headline). Layer 3 is a no-more-than-three-sentence explanation of what the application does. With this inventory, should IT have to swoop in to save the day, there’s a good starting point to swoop from. (See the sketch of a sample inventory entry just after this list.)
5. Establish a Mentor Program, aka a Power Users Cool Kids Club. How to do this? See “Mentors are your friends. Be nice to your friends,” which first appeared in InfoWorld on September 23, 1996.
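About that inventory: to make the three layers concrete, here’s a minimal sketch of what a single entry might look like if you kept the inventory in a simple script. Everything in it – the field names, the example application, the “owner” and “platform” extras – is an illustrative assumption, not a prescription; a shared spreadsheet with the same columns works just as well.

# One hypothetical entry in a DIY-application inventory.
# Layers 1-3 match the three documentation layers above; "owner" and
# "platform" are optional extras that make IT's rescue missions easier.
inventory_entry = {
    "title": "Quarterly Rebate Tracker",                # Layer 1: the title
    "headline": "Tracks rebates owed to distributors",  # Layer 2: the headline
    "description": (                                    # Layer 3: <= 3 sentences
        "Imports monthly sales extracts and computes each distributor's rebate. "
        "Flags accounts whose rebates exceed the contract cap. "
        "Produces a summary sheet for Finance at quarter close."
    ),
    "owner": "jane.doe@example.com",
    "platform": "Excel + VBA",
}

The format doesn’t matter; what matters is that anyone swooping in later can tell at a glance what the thing is, what it does, and who to ask.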
Bob’s last word: For far too long, IT’s “best practice” on DIY development has been “We won’t do it for you and won’t let you do it for yourself.”
Without a doubt, DIY development comes with some risks attached. But then, DIY prevention comes with risks of its own, namely, that various parts of the business will forgo important opportunities for technology-enabled improvements in effectiveness, all because a focus on what might go wrong blinds decision-makers to what might go right.
Now on CIO.com: “The 7 venial sins of IT management.” What it’s about: Seven mistakes to worry about that probably aren’t on your to-don’t list already.
This comment surprised me: “For the most part, IT development, along with IT ‘development’, involves a business analyst here and there. DIY IT does not.” As someone doing DIY IT in the 1980s and 2000s, I considered myself a programmer/analyst – as in somebody who lived in the business unit, was part of business meetings, and could then directly turn around and design and implement. Sounds like that was rarer than I realized.
To add clarity: I was referring to the job title and its staffing impact. One way or another, someone has to understand what’s needed (the “analyst” part of “programmer/analyst”), and someone has to get the computer to do what’s needed (the “programmer” part). And for whatever it’s worth, in a previous life I was a programmer/analyst too, and enjoyed the role immensely.
I am in non-IT R&D, which often has fairly non-standard software needs. At the most extreme, who do you want to write or modify that molecular simulation or chemical kinetics program: a team of CS or software engineering graduates, or a scientist or engineer who spent five years writing just that kind of software in graduate school, but without formal software engineering training? Note that the software interacts with no corporate databases (or only non-financial, non-personal ones held locally in the R&D department), is never used by people outside the company, may have only a small single-digit number of users, and may have only text I/O. It probably runs on Linux and just might start from existing code written in something like Fortran.
In other R&D applications, not quite as extreme, the entire project may be “R,” and the most likely outcome is that the project runs its course over a year or two and is shelved after finding that the technology or the market isn’t what was hoped. In many cases in an environment like this, a “citizen developer” with domain-specific knowledge can safely bypass months of planning, requirements-gathering, UX design, and so on, and produce fit-for-purpose software.

In one case, IT quoted, after a few iterations, seven months (over $250k) to waterfall an application to take data from an instrument, make a large number of plots, and perform a few other mathematical operations to produce numbers of interest. The customer already had a spreadsheet they were using, but it took more than a day to produce the plots and analyses for a day’s worth of data. A citizen developer used macros in the same spreadsheet, written over a weekend plus about one week of feedback iterations, to produce exactly what was required. The application required only one minor update over two years, when the data format from the instrument changed. The entire project was shelved after those two years and the intellectual property was sold. Application cost? A few thousand dollars. Application delivery time? Less than two weeks.
A key thing missing from many discussions of DIY software is that the software produced is an excellent prototype that captures the core requirements and a minimal UX design. The process of DIY development, and of using the software, means that should IT eventually get involved, huge swaths of requirements are already obvious, a starting design has been prototyped, feature creep has largely been eliminated (the project was Agile by default and has run through many iterations), and useful starting points for domain-specific routines have already been written.
With the right attitudes and practices, DIY development can be a great boon to the organization, even when IT needs to take over and modify or replace the result.