Six key takeaways from Google Cloud Next ’24

It wouldn’t have taken a billion-parameter large language model (LLM) to predict that the dominant theme of this year’s Google Cloud Next conference would be generative AI—indeed, it will probably be the dominant theme of the year for most enterprise software developers.

At the event, Google introduced a host of updates to its cloud platform to make working with LLMs easier, and added generative AI-based assistants to many of its offerings. Here are six key takeaways from the conference:

Recognizing that AI workloads differ from other workloads, Google showcased a range of updates to its cloud infrastructure to support them and help enterprises optimize cloud expenditure. First up: Google has made the latest iteration of its proprietary accelerator module for AI workloads, the Tensor Processing Unit (TPU) v5p, generally available in its cloud. The TPU pods now have support for Google Kubernetes Engine (GKE) and multi-host serving on GKE.

Additionally, under an expanded partnership with Nvidia, Google is also introducing the A3 Mega virtual machine (VM) to its cloud, powered by Nvidia H100 GPUs.

Other updates include a slew of optimizations, especially caching, in its storage products. These enhancements also come with a new resource management and job scheduling service for AI workloads, named the Dynamic Workload Scheduler.

Pair programming with Google’s AI coding tool won’t be a duet any longer, though. Google has renamed its previously released Duet AI for Developers to Gemini Code Assist, matching the branding of its latest LLM.

Gemini Code Assist has new features to go with its new name. Based on the Gemini 1.5 Pro model, it provides AI-powered code completion, code generation, and chat services. It works in the Google Cloud Console, integrates into popular code editors such as Visual Studio Code and JetBrains IDEs, and supports enterprise codebases hosted on-premises or in GitHub, GitLab, Bitbucket, or multiple repositories.

The additions include full codebase awareness, code customization, and an expanded partner ecosystem intended to make the tool more efficient.

To make code generation more efficient, the company is expanding Gemini Code Assist’s partner ecosystem with partners such as Datadog, Datastax, Elastic, HashiCorp, Neo4j, Pinecone, Redis, SingleStore, Snyk, and Stack Overflow.

For managing cloud services, the cloud provider has introduced Gemini Cloud Assist, an AI-powered assistant designed to help enterprise teams manage applications and networks in Google Cloud.

Gemini Cloud Assist can be accessed through a chat interface in the Google Cloud console. It is powered by Google’s proprietary large language model, Gemini.

Enterprises can also use Gemini Cloud Assist to prioritize cost savings, performance, or high availability. Based on natural language input from an enterprise team, Gemini Cloud Assist identifies areas for enhancement and suggests how to achieve those goals. It can also be embedded directly into the interfaces where enterprise teams manage different cloud products and workloads.

Apart from managing application life cycles, Gemini Cloud Assist can be used by enterprises to generate AI-based assistance across a variety of networking tasks, including design, operations, and optimization.

The Gemini-based AI assistant has also been added to Google Cloud’s suite of security operations offerings. It can provide identity and access management (IAM) recommendations and key insights, including insights for confidential computing, that help reduce risk exposure.

In order to compete with similar offerings from Microsoft and AWS, Google Cloud has released a new generative-AI tool for building chatbots, Vertex AI Agent Builder. It’s a no-code tool that combines Vertex AI Search and the company’s Conversation portfolio of products. It provides a range of tools to build virtual agents, underpinned by Google’s Gemini LLMs.

Its big selling point is its out-of-the-box retrieval-augmented generation (RAG) system, Vertex AI Search, which the company says can ground agents faster than traditional RAG techniques. Built-in RAG APIs help developers quickly perform checks on grounding inputs.

Additionally, developers have the option to ground model outputs in Google Search to further improve responses.
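The grounding idea behind a RAG system can be sketched in a few lines: retrieve documents relevant to the query, then build a prompt that constrains the model to answer only from them. The sketch below uses naive keyword-overlap retrieval purely for illustration; Vertex AI Search’s actual retrieval is semantic, and its APIs differ.

```python
# Toy sketch of the retrieval step behind a RAG pipeline.
# Scoring here is naive keyword overlap; a production system such as
# Vertex AI Search would use embedding-based semantic retrieval instead.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model answers from the documents."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "TPU v5p is Google's latest Tensor Processing Unit.",
    "A3 Mega VMs are powered by Nvidia H100 GPUs.",
    "Gemini 1.5 Pro supports a 1-million-token context window.",
]
prompt = build_grounded_prompt("Which GPUs power A3 Mega VMs?", docs)
```

Grounding in Google Search follows the same pattern, with search results taking the place of the enterprise document store.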

Other changes to Vertex AI include updates to existing LLMs and expanded MLOps capabilities.

The LLM updates include a public preview of the Gemini 1.5 Pro model, which supports a 1-million-token context window. Gemini 1.5 Pro in Vertex AI will also be able to process audio streams, including speech and the audio track of videos.
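For a sense of scale, a 1-million-token window can be roughly sized with the common heuristic of about four characters per English token; the exact ratio varies by tokenizer and language, so this is a back-of-the-envelope estimate only.

```python
# Back-of-the-envelope check of what fits in a 1-million-token context,
# using the rough heuristic of ~4 characters per English token.

CONTEXT_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4  # rough estimate; varies by tokenizer and language

def fits_in_context(text: str) -> bool:
    """Estimate whether text fits in the context window."""
    return len(text) / CHARS_PER_TOKEN <= CONTEXT_TOKENS

# A long novel is on the order of 500k-750k characters -- well within budget.
book = "word " * 150_000  # ~750k characters, roughly 190k estimated tokens
```

By this estimate, even several novels’ worth of text fits in a single request, which is what makes whole-codebase and long-video inputs plausible.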

The cloud service provider has also updated its Imagen 2 family of image-generation models with new features, including photo editing capabilities and the ability to create 4-second videos, or “live images,” from text prompts. Other model updates to Vertex AI include the addition of CodeGemma, a new lightweight model from its proprietary Gemma family.

The MLOps updates include Vertex AI Prompt Management, which helps enterprise teams experiment with, migrate, and track prompts along with their parameters. Other expanded capabilities include tools such as Rapid Evaluation, for checking model performance while iterating on prompt design.
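The bookkeeping a prompt-management tool performs can be illustrated with a minimal registry that versions prompt templates together with their parameters, so experiments stay reproducible. The class and method names here are hypothetical, not the Vertex AI API.

```python
# Minimal illustration of prompt versioning: each saved prompt is stored
# with its parameters so a past experiment can be reproduced exactly.
# PromptRegistry is an illustrative stand-in, not the Vertex AI API.

import dataclasses

@dataclasses.dataclass
class PromptVersion:
    template: str
    params: dict  # e.g. temperature, max output tokens

class PromptRegistry:
    def __init__(self) -> None:
        self._versions: list[PromptVersion] = []

    def save(self, template: str, **params) -> int:
        """Record a new prompt version and return its version number."""
        self._versions.append(PromptVersion(template, params))
        return len(self._versions)

    def get(self, version: int) -> PromptVersion:
        """Fetch a previously saved version (1-indexed)."""
        return self._versions[version - 1]

reg = PromptRegistry()
v1 = reg.save("Summarize: {text}", temperature=0.2)
v2 = reg.save("Summarize in one sentence: {text}", temperature=0.2)
```

A rapid-evaluation step would then run each stored version against a fixed test set and compare outputs, which is the workflow the new Vertex AI tools are aimed at.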

Google Cloud has added capabilities driven by its proprietary large language model, Gemini, to its database offerings, which include Bigtable, Spanner, Memorystore for Redis, Firestore, Cloud SQL for MySQL, and AlloyDB for PostgreSQL.

The Gemini-driven capabilities include SQL generation and AI assistance in managing and migrating databases.

In order to help manage databases better, the cloud service provider has added a new feature called the Database Center, which will allow operators to manage an entire fleet of databases from a single pane.

Google has also extended Gemini to its Database Migration Service, which earlier had support for Duet AI.

Gemini’s improved features will make the service better, the company said, adding that Gemini can help convert database-resident code, such as stored procedures and functions, to the PostgreSQL dialect.

Additionally, Gemini-powered database migration explains the translated code with a side-by-side comparison of dialects, along with detailed explanations and recommendations.

As part of these updates, the cloud services provider has added new generative AI-based features to AlloyDB AI. These features let generative AI-based applications query data using natural language, and add a new type of database view.
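The general shape of a natural-language query layer looks roughly like this: the application hands the database schema and the user’s question to a model, then executes the SQL the model returns. The `generate_sql` stub below stands in for the model call and is illustrative only; it is not AlloyDB AI’s actual interface, and the example uses SQLite so it is self-contained.

```python
# Illustrative natural-language-to-SQL flow with a stubbed model call.
# A real system would prompt an LLM with the schema and question, then
# validate the generated SQL before executing it.

import sqlite3

def generate_sql(question: str, schema: str) -> str:
    """Stub for the LLM call; returns a hard-coded answer for this demo."""
    return "SELECT name FROM products WHERE price > 100"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("laptop", 1200.0), ("mouse", 25.0)])

sql = generate_sql("Which products cost more than $100?",
                   "products(name, price)")
rows = conn.execute(sql).fetchall()
```

In production, validating and sandboxing the generated SQL before execution is essential, since the model’s output is effectively untrusted input.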

At Google Cloud Next ’24, Google also unveiled three open source projects for building and running generative AI models: MaxDiffusion, JetStream, and Optimum-TPU.

The company also added new models to MaxText, its project of JAX-built LLMs. The new models include Gemma, GPT-3, Llama 2, and Mistral, supported on both Google Cloud TPUs and Nvidia GPUs.

Copyright © 2024 IDG Communications, Inc.
