Google has added new large language models (LLMs) and a new agent builder feature to its AI and machine learning platform Vertex AI at its annual Google Cloud Next conference.
The LLMs include a public preview of the Gemini 1.5 Pro model, which supports a 1-million-token context window.
The expanded context window allows native reasoning over enormous amounts of data specific to a request, the company said. It added that, according to feedback from enterprises, this expanded support can eliminate the need to fine-tune models or employ retrieval-augmented generation (RAG) to ground model responses.
Additionally, Gemini 1.5 Pro in Vertex AI will be able to process audio streams, including speech and the audio tracks of videos.
Google said the audio processing capability enables cross-modal analysis, delivering insights across text, images, videos, and audio.
The Pro model will also support transcription, which can be used to search audio and video content, the company added.
The cloud service provider has also updated its Imagen 2 family of image generation models with new features, including photo editing capabilities and the ability to create 4-second videos, or “live images,” from text prompts.
While the text-to-live-images feature is in preview, the photo editing capabilities have been made generally available, alongside a digital watermarking feature that allows users to tag AI-generated images.
Other model updates to Vertex AI include the addition of CodeGemma, a new lightweight model from its Gemma family of open models.
To help enterprises ground models and get more accurate responses from them, Google will allow enterprise teams to ground LLMs in Google Search as well as in their own data via Vertex AI.
“Foundation models are limited by their training data, which can quickly become outdated and may not include information that the models need for enterprise use cases,” the company said, adding that grounding in Google Search can significantly improve the accuracy of responses.
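The idea behind grounding is straightforward: retrieved, up-to-date facts are injected into the prompt so the model is not limited to its training data. The sketch below illustrates that pattern in plain Python; the function name and prompt format are illustrative assumptions, not Vertex AI APIs.

```python
# Minimal grounding sketch: augment a prompt with retrieved snippets
# so the model answers from current, trusted context rather than
# relying solely on (possibly stale) training data.
# All names here are illustrative, not part of the Vertex AI SDK.

def ground_prompt(question: str, retrieved_snippets: list[str]) -> str:
    """Build a grounded prompt by prepending retrieved context."""
    context = "\n".join(f"- {s}" for s in retrieved_snippets)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

snippets = ["Gemini 1.5 Pro supports a 1-million-token context window."]
prompt = ground_prompt(
    "What context window does Gemini 1.5 Pro support?", snippets
)
```

In a production system the snippets would come from Google Search or an enterprise data store rather than a hard-coded list.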
Expanded MLOps capabilities in Vertex AI
The cloud service provider has expanded the MLOps capabilities in Vertex AI to help enterprises with machine learning tasks.
One of the expanded capabilities is Vertex AI Prompt Management, which helps enterprise teams experiment with prompts, migrate them, and track them along with their parameters.
“Vertex AI Prompt Management provides a library of prompts used among teams, including versioning, the option to restore old prompts, and AI-generated suggestions to improve prompt performance,” the company said.
The prompt management feature also allows enterprises to compare prompt iterations side by side to assess how small changes affect outputs, while letting teams take notes, it added.
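The workflow described here — versioned prompts, restoring old ones, and side-by-side comparison — can be captured in a few lines. This is a hypothetical toy sketch of the concept, not the Vertex AI Prompt Management API.

```python
# Toy prompt library illustrating versioning, restore, and side-by-side
# comparison. Class and method names are hypothetical.

class PromptLibrary:
    def __init__(self):
        self._versions: dict[str, list[str]] = {}

    def save(self, name: str, text: str) -> int:
        """Store a new iteration and return its version number."""
        self._versions.setdefault(name, []).append(text)
        return len(self._versions[name]) - 1

    def restore(self, name: str, version: int) -> str:
        """Retrieve an older iteration of a prompt."""
        return self._versions[name][version]

    def compare(self, name: str, v1: int, v2: int) -> tuple[str, str]:
        """Return two iterations side by side for manual review."""
        return self.restore(name, v1), self.restore(name, v2)

lib = PromptLibrary()
lib.save("summarize", "Summarize this text.")
lib.save("summarize", "Summarize this text in three bullet points.")
old, new = lib.compare("summarize", 0, 1)
```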
Other expanded capabilities include evaluation tools such as Rapid Evaluation, which can assess model performance as teams iterate on prompt design. Rapid Evaluation is currently in preview.
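At its core, this kind of evaluation scores model outputs against references with some metric so that prompt variants can be ranked. The sketch below uses a simple exact-match metric; the function name and metric choice are illustrative assumptions, not Rapid Evaluation's actual interface.

```python
# Sketch of a prompt-evaluation loop: score candidate outputs against
# reference answers with a naive exact-match metric. A real evaluation
# service would offer richer metrics (similarity, groundedness, etc.).

def exact_match_rate(outputs: list[str], references: list[str]) -> float:
    """Fraction of outputs that exactly match their reference answer."""
    matches = sum(o.strip() == r.strip() for o, r in zip(outputs, references))
    return matches / len(references)

# One of two outputs matches its reference.
rate = exact_match_rate(["Paris", "Rome"], ["Paris", "Berlin"])
# rate == 0.5
```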
Apart from adding new capabilities to the models, the company has expanded data residency for data stored at rest for the Gemini, Imagen, and Embeddings APIs on Vertex AI to 11 new countries—Australia, Brazil, Finland, Hong Kong, India, Israel, Italy, Poland, Spain, Switzerland, and Taiwan.
Vertex AI gets new agent builder
In order to compete with rivals such as Microsoft and AWS, Google Cloud has released a new generative-AI-based agent builder offering.
Named Vertex AI Agent Builder, the no-code offering combines Vertex AI Search with the company’s Conversation portfolio of products and provides a range of tools for building virtual agents, underpinned by Google’s Gemini LLMs, more quickly.
The no-code offering’s advantage is its out-of-the-box RAG system, Vertex AI Search, which can ground agents faster than traditional RAG pipelines, which tend to be time-consuming and complicated.
“Only a few clicks are necessary to get up and running, and with pre-built components, the platform makes it simple to create, maintain, and manage more complicated implementations,” the company said in a statement.
RAG APIs built into the offering can help developers quickly perform checks on grounding inputs, it added.
For even more complex implementations, Vertex AI Agent Builder offers vector search to build custom embeddings-based RAG systems as well.
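An embeddings-based RAG system of the kind vector search enables boils down to ranking documents by the similarity of their embeddings to a query embedding. The sketch below shows that retrieval step with hand-made toy vectors; in a real system the vectors would come from an embedding model, and all names here are illustrative.

```python
# Minimal embeddings-based retrieval sketch: rank documents by cosine
# similarity between a query vector and each document's vector.
# The toy 2-d vectors stand in for real embedding-model outputs.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], corpus: list[dict], k: int = 1) -> list[str]:
    """Return the texts of the k documents closest to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

docs = [
    {"text": "Refund policy: 30 days.", "vec": [1.0, 0.1]},
    {"text": "Shipping takes 5 days.", "vec": [0.1, 1.0]},
]
best = top_k([0.9, 0.2], docs, k=1)  # query vector leans toward doc 1
```

The retrieved text would then be fed into a grounded prompt, as in the earlier grounding step.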
Further, developers also have the option to ground model outputs in Google Search in order to further improve responses.
The no-code offering’s tools include Vertex AI extensions, functions, and data connectors.
While Vertex AI extensions are pre-built, reusable modules that connect an LLM to a specific API or tool, Vertex AI functions let developers describe a set of functions or APIs and have Gemini intelligently select, for a given query, the right one to call, along with the appropriate parameters, the company said.
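The function-selection idea can be illustrated without a model at all: the developer declares functions with descriptions, and a selector matches a query to the best candidate. In the toy sketch below, a naive keyword match stands in for Gemini's selection; every name and the matching logic are illustrative assumptions.

```python
# Toy illustration of function calling: declared functions carry
# descriptions, and a selector picks one for a given query. Here a
# naive keyword overlap stands in for the model's reasoning.

FUNCTIONS = {
    "get_weather": {
        "description": "current weather for a city",
        "keywords": {"weather", "temperature", "forecast"},
    },
    "get_stock_price": {
        "description": "latest stock price for a ticker",
        "keywords": {"stock", "price", "ticker"},
    },
}

def select_function(query: str) -> str:
    """Pick the declared function whose keywords best match the query."""
    words = set(query.lower().split())
    scores = {name: len(words & spec["keywords"])
              for name, spec in FUNCTIONS.items()}
    return max(scores, key=scores.get)

choice = select_function("What is the weather in Paris?")
# choice == "get_weather"
```

The selected name (plus extracted parameters, omitted here) would then be executed by the application, with the result passed back to the model.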
The data connectors, on the other hand, help ingest data from enterprise and third-party applications like ServiceNow, Hadoop, and Salesforce, connecting generative applications to commonly used enterprise systems, it added.
In addition to all Vertex AI updates, the company has added Gemini to its business intelligence offering, Looker.
The infusion of Gemini in Looker will add capabilities such as conversational analytics, report and formula generation, LookML and visualization assistance, and automated Google Slides generation to the platform.
Other updates to the company’s data analytics suite include a managed version of Apache Kafka for BigQuery and continuous queries for the same service, both in preview.
Copyright © 2024 IDG Communications, Inc.