What kind of future will AI bring enterprise IT?

If you’re an enterprise looking for ways to come through a recession stronger while beating out competitors in the process, open source isn’t the answer. Neither is cloud. It’s true that both can be helpful. Both are ingredients in how enterprises should rethink their traditional approaches to IT. But neither will do much to distinguish you.

Why? Because everyone else is already using open source and cloud, too. There was a time when being first to embrace the economics of open source projects like Linux or MySQL could set a company apart, but not anymore. Enterprise adoption of cloud is still nascent (roughly 10% of all IT spending in 2022, per Gartner estimates), but it is moving at such a pace that you’re probably not going to distinguish your customer experience through cloud alone. What will set you apart?

Machine learning (ML) and artificial intelligence (AI). But maybe not how you think.

Thinking incrementally about AI

This is not one of those articles touting AI/ML as some ill-defined panacea. Yes, AI and ML have been instrumental in developing potent medicines to combat COVID-19, and they could even someday help find a cure for cancer. But there’s no AI/ML fertilizer you can pour onto moribund IT projects to make them magically blossom. Companies like Google and Uber have been in the vanguard of AI/ML, but let’s face it: You don’t have their engineering talent.

Even these companies are using the downturn to spend less time on moon shots and more time on incremental advances, as a recent article in The Wall Street Journal (“Big Tech Stops Doing Stupid Stuff”) calls out: The tech sector “that has long worked to disrupt is now focusing on enhancing what already exists.” Instead of reinventing wheels, the article notes, “The best tech investments of 2023 might be companies content to spend their coin greasing [the wheel].”

One big way enterprises are doing this is with AI/ML, but not with gee-whiz flying cars. AI/ML is being used in far more pedestrian (and useful) ways.

Zillow spent years trying to use AI/ML models to go big on flipping houses. In late 2021, however, the company exited that business, citing an inability to forecast prices despite sophisticated models. Instead, Zillow has turned pragmatic, using AI/ML to help would-be renters see listings as they walk through a city and to let landlords construct floor plans from photos of those apartments. Much less sexy than a billion-dollar house-flipping business, and much more useful for customers.

Google, for its part, has started offering retailers the ability to track store inventory by analyzing video data. Google trained its models on a data set of more than one billion product images, and the system can recognize products whether the image data comes from a mobile phone or an in-store camera. If it works as advertised, it will be a significant boon for retailers, which have traditionally struggled to get a handle on inventory. Not a sexy use of AI/ML, but useful for retail customers.
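
The underlying pattern, recognizing products in ordinary camera frames, is something a team can prototype with off-the-shelf tooling. Here is a minimal sketch in Python, assuming a pretrained torchvision classifier as a stand-in for Google’s product-trained models; a real inventory system would fine-tune on its own product catalog rather than rely on generic ImageNet labels.

```python
# Illustrative sketch only: classify a shelf or phone photo with a pretrained model.
# This shows the shape of the pipeline, not Google's actual inventory service.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()  # resize, crop, and normalize as the model expects

def top_labels(image_path: str, k: int = 3) -> list[tuple[str, float]]:
    """Return the k most likely labels for a single photo."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    scores, idxs = probs.topk(k)
    return [(weights.meta["categories"][int(i)], float(s)) for s, i in zip(scores, idxs)]

# Hypothetical file name for illustration
print(top_labels("shelf_photo.jpg"))  # e.g. [('pop bottle', 0.41), ...]
```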

Microsoft, a leader in AI/ML, just made a huge investment in OpenAI, with the reported intention of bringing GPT-esque functionality to its productivity apps, such as Word or Outlook. Microsoft has the resources to bet big on a moon shot makeover of Office, perhaps making it entirely voice driven. Instead, it’s likely going to give Office a serious Clippy upgrade with a GitHub Copilot sort of approach. That is, GPT might take over some of the undifferentiated heavy lifting of writing docs or building spreadsheets. Less sexy, more useful.
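
To make that idea concrete, here is a minimal sketch, in Python, of how an application might hand boilerplate drafting to a GPT-style model through the OpenAI API. The model name, prompt, and helper function are illustrative assumptions, not Microsoft’s implementation, and a person still reviews and owns whatever comes back.

```python
# Illustrative sketch only: delegate first-draft writing to a GPT-style model.
# Model name and prompt are assumptions; substitute whatever your account exposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_status_update(bullet_points: list[str]) -> str:
    """Turn rough bullet points into a first-draft status update."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Rewrite the user's bullet points as a short, plain-English status update."},
            {"role": "user", "content": "\n".join(bullet_points)},
        ],
    )
    return response.choices[0].message.content

draft = draft_status_update([
    "migration to new billing service 80% complete",
    "two open bugs in invoice export",
    "target date unchanged",
])
print(draft)  # a human still reviews and is accountable for the final text
```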

Choosing not to fail with AI

The incremental approach turns out to be the smartest way to build with AI/ML. As AWS Serverless Hero Ben Kehoe argues, “When people imagine integrating AI … into software development (or any other process), they tend to be overly optimistic.” A key failing, he stresses, is trusting AI/ML to do the thinking without a commensurate ability to verify its results: “A lot of the AI takes I see assert that AI will be able to assume the entire responsibility for a given task for a person, and implicitly assume that the person’s accountability for the task will just sort of … evaporate?”

In the real world, developers (or others) have to take responsibility for outcomes. If you’re using GitHub Copilot, for example, you’re still responsible for the code, no matter how it was written. If the code ends up buggy, it won’t work to blame the AI. The person with the pay stub will bear the blame, and if they can’t verify how the model arrived at a result, they’re likely to scrap the AI before they’ll give up their job.

This is not to say that AI and ML don’t have a place in software development or other areas of the enterprise. Just look at the examples from Zillow, Google, and Microsoft. The trick is to use AI/ML to complement human intelligence and allow that same human intelligence to fact-check results. As Kehoe suggests, “When looking at claims AI is going to automate some process, look for what the really hard, inherent complexity of that process is, and whether the process would be successful if a large degree of (new) uncertainty [through black-box AI] was injected into that complexity.”

Adding uncertainty and making accountability harder is a non-starter. Instead, enterprises will look for areas that allow machines to take on more responsibility while still leaving the people involved accountable for the results. This will be the next big thing in enterprise IT, precisely because it will be lots of small, incremental things.
