A balanced approach to AI platform selection

I’m not sure why our industry keeps falling into the same trap: When a new concept emerges, there are near-immediate announcements that it runs best on one particular platform. Enterprises shouldn’t even think about other options.

This VentureBeat article is an example, although it is more balanced than most. Many pundits present cloud computing as the only rational choice for AI, while hardware vendors declare that traditional on-premises hardware is the best option. Who’s right?

The nuances of platform selection

The questions I get at AI speaking events used to be some version of “what’s the best cloud?” Now it’s “where should I run AI?” Neither question has a black-and-white answer. A lot of planning must go into the selection process to define the best clouds and best AI platforms to solve specific problems.

Remember 10 years ago when the “cloud only” gang led the parade? Many enterprises in their thrall applied cloud computing to every problem. Unfortunately, those square-peg clouds fit into square-hole problems only about half the time.

It looks like we’re heading for the same old snare. The simplest way to avoid the pitfalls is to understand the specific business problems the enterprise wants to solve. Spoiler alert: The final answer won’t always be a public cloud.

I’ve been having fun discussing these “one versus the other” recommendations in professional conversations. Those who push a single-platform approach to AI often wave away specific counterexamples to defend a general claim, saying things like, “Yeah, it’s not correct in that specific business case, but generally it is,” which is illogical.

I don’t oppose cloud computing. It’s a logical host for many AI solutions, and I’ve often been the architect of those solutions. The cloud has its own AI ecosystem that includes all the generative AI tool sets, on-demand scalability, and so forth.

Rest assured, multiple options are available to address your needs, and the final decision is yours. AI architects should pick a winning platform based on your business’s specific requirements; the skilled ones will select the most cost-effective AI platform that yields the highest value for your enterprise.

For AI, the cloud’s agility and the immediacy with which resources can be spun up or scaled down are invaluable in a field characterized by rapid evolution. Furthermore, cloud platforms have advanced security and operational stability measures that few enterprises can replicate internally. However, the cloud is often too expensive and may not fit the compliance and security models in place for a specific use case. Also, did I say it was too expensive? That’s something you need to consider with a clear head.

Proponents of on-premises infrastructure argue for better control and compliance, particularly in highly regulated industries such as healthcare or finance. They cite potential cost savings for data-heavy workloads, improved latency and performance for specific tasks, and the autonomy to customize infrastructure without being tethered to cloud vendors’ constraints. These are all good points, but each is relevant only to a particular type of business case.

So, cloud or on-premises, how do you decide? It’s easier than you think. Use this process to guide you:

  1. Determine the business use case.
  2. Gain consensus on the business requirements.
  3. Consider the technology requirements.
  4. Select the correct platform.

Note that platform selection comes at the end. Too many people will declare that they are somehow “platform clairvoyant” and can pick your AI platform despite having no understanding of the problem that needs to be solved. Hardware and cloud providers are now doing this daily. Remember those square-peg solutions? Odds are that you have a round-hole problem.
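
To make that ordering concrete, here is a minimal sketch of how the outputs of steps 1 through 3 might feed a simple weighted scoring exercise in step 4. This is my illustration, not any vendor’s method; every requirement name, weight, and score is a hypothetical placeholder to be filled in per use case.

    # Minimal sketch: weighted scoring for step 4, "Select the correct platform."
    # All requirement names, weights, and scores are hypothetical placeholders.

    # Output of steps 2 and 3: requirements, weighted by importance (weights sum to 1).
    requirements = {
        "data_residency_and_compliance": 0.30,
        "three_year_cost": 0.25,
        "elastic_scalability": 0.20,
        "latency_sensitivity": 0.15,
        "managed_ai_tooling": 0.10,
    }

    # How well each candidate satisfies each requirement (0 = poor, 5 = excellent).
    candidates = {
        "public_cloud": {"data_residency_and_compliance": 3, "three_year_cost": 2,
                         "elastic_scalability": 5, "latency_sensitivity": 3,
                         "managed_ai_tooling": 5},
        "on_premises":  {"data_residency_and_compliance": 5, "three_year_cost": 4,
                         "elastic_scalability": 2, "latency_sensitivity": 4,
                         "managed_ai_tooling": 2},
        "hybrid_edge":  {"data_residency_and_compliance": 4, "three_year_cost": 3,
                         "elastic_scalability": 4, "latency_sensitivity": 5,
                         "managed_ai_tooling": 3},
    }

    def weighted_score(scores, weights):
        """Combine requirement weights and platform scores into a single number."""
        return sum(weights[req] * scores[req] for req in weights)

    for platform, scores in candidates.items():
        print(f"{platform}: {weighted_score(scores, requirements):.2f}")

The point is not the arithmetic; it’s that the platform column can’t be scored until the business and technology requirements exist.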

Business case reigns supreme

You must understand the financial realities that lurk beneath any new technology or its application. AI-specific hardware (such as Nvidia’s high-performance GPUs) comes with a significant price tag. Cloud providers have the financial wherewithal to absorb and spread these costs across a broad user base. Conversely, enterprises that invest heavily in on-premises hardware face a perpetually daunting cycle of upgrades and obsolescence.

With that said, cloud providers too frequently come up with architectures that cost way too much. Even with the efficiencies mentioned above, including the soft benefits of agility, the end cost can far outweigh the value that comes back to the business. There are also opportunities for enterprises to carefully craft on-premises systems that do not need high-end, expensive processors. The notion that GPUs are mandatory for every AI application is just silly. We have AI systems running on smartphones, for goodness’ sake.
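
A back-of-the-envelope comparison can keep that cost discussion grounded. The sketch below is mine, and every figure in it is a hypothetical placeholder rather than vendor pricing; it only shows which variables drive the answer.

    # Back-of-the-envelope three-year cost comparison.
    # All numbers are hypothetical placeholders; substitute your own vendor
    # pricing, utilization, and staffing figures.

    YEARS = 3

    # On-premises: buy GPU servers up front, then pay to run and refresh them.
    servers = 4
    server_price = 250_000          # hypothetical cost per GPU server
    annual_power_and_ops = 60_000   # hypothetical power, cooling, and staff per year
    refresh_factor = 0.5            # assume a mid-cycle hardware refresh at half price

    on_prem_tco = (servers * server_price * (1 + refresh_factor)
                   + annual_power_and_ops * YEARS)

    # Cloud: rent equivalent GPU capacity by the hour.
    gpu_instances = 4
    hourly_rate = 30.0              # hypothetical per-instance hourly price
    utilization = 0.6               # fraction of each year the instances actually run
    hours_per_year = 8_760

    cloud_tco = gpu_instances * hourly_rate * hours_per_year * utilization * YEARS

    print(f"On-premises 3-year estimate: ${on_prem_tco:,.0f}")
    print(f"Cloud 3-year estimate:       ${cloud_tco:,.0f}")

Change the utilization or refresh assumptions and the winner flips, which is exactly why the numbers have to come from your own use case rather than from a provider’s pitch.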

Edge computing further complicates the equation, particularly for latency-sensitive applications like autonomous vehicles and real-time analytics. Some enterprises might benefit from deploying AI workloads on edge devices, gaining reduced latency and enhanced performance.

Take advantage of each side’s strengths

Given the complex nature of the landscape, the choice between cloud and on-premises infrastructure should be more nuanced than an either/or decision. Enterprises ought to adopt a hybrid approach that combines the strengths of both paradigms. For instance, businesses might deploy latency-sensitive or highly regulated workloads on-premises or at the edge while using the cloud for its cost efficiency, scalability, and access to complete AI ecosystems.
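
As one illustration of that hybrid split, here is a rough sketch of a placement rule. It is not a prescription; the Workload fields and the latency threshold are hypothetical and would come out of the requirements-gathering process described earlier.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        highly_regulated: bool   # e.g., subject to healthcare or finance rules
        max_latency_ms: float    # latency budget the workload must meet

    def place(workload: Workload) -> str:
        """Keep regulated work on-premises, push latency-critical work to the edge,
        and default everything else to the cloud for scalability and AI tooling."""
        if workload.highly_regulated:
            return "on_premises"
        if workload.max_latency_ms < 20:   # illustrative threshold for "latency-sensitive"
            return "edge"
        return "public_cloud"

    print(place(Workload("patient-record-search", True, 200)))   # -> on_premises
    print(place(Workload("driver-assist-vision", False, 10)))    # -> edge
    print(place(Workload("model-training", False, 5000)))        # -> public_cloud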

The question is not whether the cloud will dominate or whether on-premises will stage a comeback; it’s about recognizing that both have their place. The goal should be to leverage the full spectrum of available resources to meet specific business needs most effectively. Cloud, on-premises, or both: Enterprises that pursue an objective approach with a well-understood set of goals will navigate the complexities of AI adoption and position themselves to unlock AI’s full transformative potential.
