Recently, hackers targeted widely used Python libraries in the Python Package Index (PyPI). These libraries are often downloaded and installed only when a program runs, which makes the problem hard to detect, and the same libraries are shared across a huge number of programs. In this case, the compromised packages were downloaded thousands of times, and nobody noticed until significant damage had already been done.

For commercial software vendors, such a vulnerability would (hopefully) be caught in testing. But for IT managers and shadow IT users who are exploring open-source tools, enterprise-grade or not? Well, that’s like skating on thin ice with a blowtorch. One wrong download and you’ve potentially opened the door to a cyberattack, with no warning signs or red flags in sight.

I love the open-source software model; I have built a large part of my career on it. But, like everything, it comes with challenges, and one that is emerging is a set of new (and not so new) security threats.

So what might help mitigate some of the risks?

I was inspired by some really smart people in public health. Taking a page from the harm reduction approaches used in that field, we could apply similar principles to open source and shadow IT. Instead of saying “No,” what if we adopted a “use safely” approach?

  • Don’t assume safety because it’s free.
    Just because something’s open source doesn’t mean it has been vetted. To be clear, I think a little paranoia is probably helpful. Open-source libraries are like free samples: some are great, but others may leave you feeling a bit sick. Remind your teams that due diligence is required, especially with open-source software.
  • Harm reduction for shadow IT.
    Shadow IT is here to stay, and rather than play the whack-a-mole game of shutting it down, why not embrace it safely? Encourage teams to innovate and explore new tools, but do it in a controlled way. Use completely segregated sandbox environments to offer a safe place for testing. In harm reduction, safe injection sites minimize risk; in IT, secure environments can do the same.
  • Scan, and realize that scanning probably isn’t enough.
    Insist on automated scanning for every piece of code, and recognize that those tools may not drill deep enough into the runtime libraries and tools pulled from platforms like PyPI. Just as health officials promote regular testing to prevent the spread of disease, your security team should promote automated scanning to catch what it can, while remembering that a clean scan is not a clean bill of health (see the sketch after this list).
  • Community matters.
    One of the strengths of open source is the community behind it, but this also means there’s no singular authority watching over every package. Get involved in the community where you can. Have your developers contribute and review code. It’s not just about using tools; it’s about being part of a network that looks out for one another. In the public health space, community efforts are critical to harm reduction. The same applies here.
  • Update religiously—Or not. 
    Keeping software and libraries updated is a no-brainer, or is it? In open-source environments, patches and updates roll out fast. In this case, the hackers managed to commit code to widely used libraries and nobody noticed, so it probably would have made sense for a developer to wait and see before pulling the latest release.
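
To make the scanning point concrete, here is a minimal sketch of the kind of inventory check a scan usually starts with. The APPROVED allowlist, the package names, and the versions are all hypothetical placeholders; a real program would pull them from whatever vetting process your security team runs, and dedicated tools such as pip-audit go further by checking installed packages against known-vulnerability databases.

    # A minimal, illustrative scan: list what is installed and flag anything
    # that is not on a reviewed allowlist. APPROVED and its contents are
    # hypothetical placeholders, not a real policy.
    from importlib.metadata import distributions

    APPROVED = {
        "requests": {"2.32.3"},   # hypothetical "vetted" versions
        "numpy": {"2.1.1"},
    }

    def audit_installed():
        findings = []
        for dist in distributions():
            name = (dist.metadata["Name"] or "").lower()
            version = dist.version
            if name not in APPROVED:
                findings.append(f"{name}=={version}: not on the reviewed allowlist")
            elif version not in APPROVED[name]:
                findings.append(f"{name}=={version}: version not yet reviewed")
        return findings

    if __name__ == "__main__":
        for finding in audit_installed():
            print(finding)

Note what this does not do: it never looks inside the packages, so a typosquatted or freshly compromised release with a “clean” name and version sails right through. That gap is exactly why scanning alone isn’t enough.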

We all like learning from other fields—and I think we all have been paying a lot of attention to public health over the last few years.  By applying these harm-reduction lessons to open source, we can balance innovation and security in a way that reduces risk without killing creativity. SAP, Python, or any other platform—it’s all the same story: prevention beats recovery.

Every so often you get a glimpse of the future, though it may not be quite the one the people crafting the message are hoping you will take away.

I just got back from the International Manufacturing Technology Show (IMTS) in Chicago, and I was incredibly impressed by the new machine tools, the robots everywhere, and the really smart people trying to figure out the ROI of the latest and greatest systems for manufacturing.

Of course, AI was a prominent topic. Some big infrastructure companies promised that their solutions could help improve customer sales and experiences, drive connected and sustainable operations, and deliver faster and better R&D and product design. Production planning, supply chain, and marketing are all part of these stories as well, and frankly, what is being discussed is well thought out and seems plausible. There seem to be enough experiments going on at enough companies to learn what is and isn’t working. One number that stood out, from a reputable source, is that 60% of manufacturing companies are using generative AI in production for something right now.

Not surprisingly, the software companies linked most closely to design and engineering are making some of the biggest claims about their new and upcoming products. After marketing automation software, CAD, design, and engineering software might be the next market disrupted by generative AI. I don’t see mechanical engineers, GIS professionals, or others being replaced, but I do see their lives being made significantly easier by a helper that manages small dimensional updates, material changes, and the like. As an example, having an AI assist with routine Product Lifecycle Management tasks would be a godsend.

Now to the dog that didn’t bark: the backend for all of this.

For engineering and design software to be truly AI-enabled, it needs training and historical data, and lots of it. This sort of data is specialized enough that it doesn’t necessarily translate easily into traditional data warehouses, RDBMS platforms, and so on. On top of that, the data might be sitting on slow SANs or spinning drives collecting dust, or it might not be online all of the time.

For years, software companies have also been eager to shift more CPU and GPU tasks to the cloud to harness the full power of large-scale rendering and design computing. That ambition has been hampered by bandwidth limitations and load issues, both internally and across the internet. I think a real computer scientist would throw up their hands in despair: these workloads are hard to cache, arrive out of order, and resist any sort of persistent storage. I spent a big part of today trying to figure out how to talk about the load challenges of nested products in a manufacturing planning algorithm (MRP), and even those loads are challenging (a toy sketch of why follows below).
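
For the curious, here is a toy sketch of what I mean, with an entirely made-up bill of materials. Even this tiny recursive explosion shows how demand fans out through nested products, and why a change at the top of the tree forces a recompute all the way down rather than a cheap cache hit; a real MRP run layers lead times, lot sizing, and on-hand inventory on top of this.

    # Toy MRP-style BOM explosion. The bill of materials below is hypothetical;
    # the point is how nested product structures multiply the work.
    from collections import defaultdict

    # parent item -> list of (child item, quantity per parent)
    BOM = {
        "bike":  [("frame", 1), ("wheel", 2)],
        "wheel": [("rim", 1), ("spoke", 32), ("hub", 1)],
        "frame": [("tube", 4)],
    }

    def explode(item, qty, requirements=None):
        """Recursively roll one top-level order down the BOM, accumulating demand."""
        if requirements is None:
            requirements = defaultdict(int)
        requirements[item] += qty
        for child, per_parent in BOM.get(item, []):
            explode(child, qty * per_parent, requirements)
        return requirements

    if __name__ == "__main__":
        # An order for 100 bikes becomes thousands of component-level demands,
        # and every engineering or quantity change at the top invalidates the lot.
        for component, qty in sorted(explode("bike", 100).items()):
            print(f"{component}: {qty}")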

Ironically, the companies that will solve this issue weren’t present at IMTS. The solution lies with cloud providers, who will need to re-engineer data centers and develop new interconnection models that allow seamless access to massive datasets and decentralized GPU and CPU processing. I believe this evolution will lead to what I call The Great Re-Cloud of 2026.  The Old Cloud is going to need to be replaced by the New Cloud.

In this future, you, Mr. or Ms. Director of IT, are going to be presented with some fantastic new options from your enterprise software publisher. These new, game-changing capabilities, however, will require a deeper sort of migration than you have faced previously, and advanced cloud performance is going to be one of your evaluation criteria. The smart software publishers will partner with cool startups, and the not-so-smart ones will try to build their own infrastructure (poorly). Some of these attempts are not going to work out, and it probably makes sense to have a Plan B in your back pocket.

You, however, will become an expert in high-performance computing (or recruit and develop trusted experts for your team). For the nerds, the best days are ahead of us.