CIO Influence

New Updates and Breakthroughs from Day 2 of Next’24

Las Vegas bids farewell to Day 2 of Google Cloud Next ’24, where attendees immersed themselves in cutting-edge updates and breakthroughs. The Day 2 highlights centered on Gemini and AI agents, offering a glimpse into the future of technology. In a riveting developer keynote, Google Cloud Chief Evangelist Richard Seroter unveiled Gemini’s transformative potential, emphasizing its capacity to push boundaries further than ever before. As the event progresses, anticipation builds for the revelations awaiting in Day 3’s sessions.

In a comprehensive showcase with live demonstrations, Richard, alongside Senior Developer Advocate Chloe Condon, delved into many Google Cloud AI technologies and integrations. Together with fellow Googlers and partners, they explored how these innovations facilitate the fundamental tasks of Google Cloud customers: building, running, and operating exceptional applications. Now, let’s take a closer look at these groundbreaking advancements.

#1 Developer Transformation with Google Cloud AI Technologies

The unveiling of Gemini Code Assist marks a pivotal moment for developers worldwide, a testament to Google Cloud’s commitment to innovation. Under the guidance of Google Cloud VP and GM Brad Calder, the audience witnessed unparalleled support for Gemini 1.5 within Code Assist, boasting a remarkable 1 million token context window—an industry-leading feat.

Subsequently, Jason Davenport, Google Cloud Developer Advocate, demonstrated the transformative capabilities of Gemini Cloud Assist. This groundbreaking tool streamlines application design, operation, troubleshooting, and optimization by harnessing context from the user’s specific cloud environment and resources, ranging from error logs to firewall rules.

Additionally, with Gemini seamlessly integrated across various Google Cloud applications such as BigQuery and Looker, coupled with support for vector search and embedding in Google Cloud databases, developers can leverage AI capabilities like never before. By harnessing multi-modal inputs, including text and images, developers can expedite the creation of recommendations, predictions, and syntheses—revolutionizing the development process.
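The embedding and vector search support mentioned above rests on a simple idea: content is mapped to vectors, and relevance becomes geometric nearness. The sketch below illustrates that idea only; the bigram-hash `embed` function is a toy stand-in for a real embedding model, not a Google Cloud API.

```python
import math

# Toy "embedding" function standing in for a real (possibly multi-modal)
# embedding model. It hashes character bigrams into a fixed-size vector
# purely for illustration.
def embed(text: str, dims: int = 64) -> list[float]:
    vec = [0.0] * dims
    lowered = text.lower()
    for a, b in zip(lowered, lowered[1:]):
        vec[(ord(a) * 31 + ord(b)) % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(u: list[float], v: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(a * b for a, b in zip(u, v))

# A vector search scans stored embeddings for the nearest neighbours
# of the query embedding.
corpus = ["reset a user password", "configure firewall rules",
          "deploy a container to Cloud Run"]
index = [(doc, embed(doc)) for doc in corpus]

def search(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda p: cosine(q, p[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]
```

In a managed database the index and the nearest-neighbour scan are handled server-side; the application only issues the query.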

Fueling this innovation are new enhancements announced during the event:
  • App Hub: Today, App Hub delivers precise, up-to-date representations of deployed applications and their resource dependencies, irrespective of the Google Cloud products utilized.
  • BigQuery Continuous Queries: Now in preview, BigQuery offers continuous SQL processing over data streams, facilitating the creation of real-time pipelines integrated with AI operators or reverse ETL.
  • Natural Language Support in AlloyDB: AlloyDB users can now query their data in natural language, while Google’s state-of-the-art ScaNN algorithm brings vector search performance reminiscent of Google’s most renowned services.
  • Gemini Code Assist in Apigee API Management: Integrating Gemini into Apigee API management empowers developers to construct enterprise-grade APIs and integrations effortlessly, leveraging natural language prompts.
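The continuous-query model behind the BigQuery announcement is a standing transformation applied to every event as it arrives, with results pushed straight to a sink. The plain-Python sketch below shows only that pattern; the stream, filter, and sink names are illustrative, not BigQuery's SQL engine.

```python
# A toy illustration of the continuous-query pattern: a standing
# transformation over a stream, feeding results to a downstream sink
# (reverse ETL, an AI operator, another table, ...).
def event_stream():
    # Stand-in for a streaming source such as a Pub/Sub subscription.
    yield {"user": "ada", "amount": 120}
    yield {"user": "lin", "amount": 45}
    yield {"user": "ada", "amount": 300}

def continuous_query(stream, sink):
    # Equivalent in spirit to:
    #   SELECT user, amount FROM stream WHERE amount > 100
    for event in stream:
        if event["amount"] > 100:
            sink.append(event)

results = []
continuous_query(event_stream(), results)
```

The key difference from a batch job is that the query never "finishes": it stays attached to the stream and emits each qualifying row as it arrives.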

In collaboration with Google Cloud Product Manager Femi Akinde and Senior Developer Advocate Chloe Condon, attendees were enlightened on the seamless transition from concept to realization, transforming innovative ideas into immersive AI applications within minutes. This array of advancements underscores Google Cloud’s unwavering commitment to empowering developers with cutting-edge AI technologies.

#2 Advancing AI Applications to Production Grade with Google Cloud Platforms

Transitioning from development to production-grade status poses a significant challenge for AI applications. As Google Cloud Developer Advocate Kaslin Fields highlighted, this challenge underscores the importance of robust infrastructure and streamlined deployment processes.

Fortunately, Google Cloud offers solutions tailored to meet these demands. Cloud Run stands out for its unparalleled speed in deploying and scaling applications, providing developers with a swift path to production readiness. Platforms like Google Kubernetes Engine (GKE) also offer a comprehensive feature set that is ideal for powering even the most complex or unique AI applications.

Key improvements facilitating this transition include:

  • Cloud Run Application Canvas: Empowering developers to generate, modify, and deploy AI applications seamlessly within Cloud Run. This integration extends to Vertex AI, enabling effortless consumption of generative APIs from Cloud Run services with just a few clicks.
  • Gen AI Quick Start Solutions for GKE: Equipping developers with pre-configured solutions for running AI on GKE, incorporating popular patterns such as Retrieval Augmented Generation (RAG) or integration with Ray.
  • Support for Gemma on GKE: GKE now offers multiple pathways for running Gemma, Google’s open model based on Gemini. The performance delivered by this integration is exceptional, ensuring optimal functionality and efficiency.
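The Retrieval Augmented Generation (RAG) pattern named in the quick-start solutions above can be sketched in a few lines: retrieve the documents most relevant to a question, then place them in the prompt as grounding context. The keyword-overlap retriever and the toy corpus below are placeholders for a real vector store and a served model such as Gemma on GKE.

```python
# Minimal sketch of the RAG pattern. Retrieval here is naive keyword
# overlap; a production system would use embeddings and a vector index.
def _words(s: str) -> set[str]:
    return set(s.lower().replace("?", "").replace(".", "").split())

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    q_words = _words(question)
    scored = sorted(corpus,
                    key=lambda d: len(q_words & _words(d)),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    # The retrieved documents ground the model's answer.
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {question}"

corpus = [
    "GKE supports GPU node pools for inference workloads.",
    "Cloud Run scales services to zero when idle.",
    "Gemma is an open model family from Google.",
]
question = "What is Gemma?"
prompt = build_prompt(question, retrieve(question, corpus))
```

The assembled prompt would then be sent to the model endpoint; the model answers from the supplied context rather than from memory alone.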

#3 Boosting Operational Efficiency with Advanced AI Tools and Infrastructure

Addressing the complexities inherent in operating AI applications, Google Cloud Reliability Advocate Steve McGhee emphasized the emergence of novel challenges during the developer keynote. Indeed, as co-founder and CTO Charity Majors highlighted, modern systems exhibit dynamic and chaotic behaviors, necessitating a shift in operational paradigms.

While generative AI introduces unpredictability, it offers a new suite of tools to navigate and manage change effectively. Key advancements facilitating this operational resilience include:

Vertex AI MLOps Capabilities: Currently in preview, Vertex AI Prompt Management empowers customers to experiment with, migrate, and track prompts and parameters. It facilitates the comparison of prompt iterations and assessment of their impact on outputs. Vertex AI Rapid Evaluation aids users in evaluating model performance during prompt design iterations.
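The core of prompt management is treating prompts like versioned artifacts: each iteration records the template and its parameters so variants can be rendered on the same input and compared. The sketch below illustrates that idea only; it is not the Vertex AI Prompt Management API, and all class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

# Illustrative prompt registry: each saved version keeps the template
# and the generation parameters used with it.
@dataclass
class PromptVersion:
    template: str
    params: dict

@dataclass
class PromptRegistry:
    versions: list = field(default_factory=list)

    def save(self, template: str, **params) -> int:
        self.versions.append(PromptVersion(template, params))
        return len(self.versions) - 1  # version id

    def render(self, version_id: int, **inputs) -> str:
        return self.versions[version_id].template.format(**inputs)

registry = PromptRegistry()
v0 = registry.save("Summarize: {text}", temperature=0.7)
v1 = registry.save("Summarize in one sentence: {text}", temperature=0.2)

# Rapid-evaluation-style comparison: render every version on the same
# input so the outputs can be scored side by side.
outputs = {v: registry.render(v, text="Next 24 day 2 recap")
           for v in (v0, v1)}
```

In a real workflow the rendered prompts would be sent to the model and the responses scored; the registry is what makes the comparison reproducible.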

Shadow API Detection: Available in preview as part of Advanced API Security, shadow API detection enables the identification of APIs lacking proper oversight or governance, mitigating the risk of security incidents.
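Conceptually, shadow API detection reduces to comparing what is actually seen in traffic against the catalog of managed, governed APIs. The set-difference sketch below captures that concept only; real detection (as in Advanced API Security) analyzes live traffic, and the endpoint paths here are invented.

```python
# APIs registered and governed through the API management layer.
managed_apis = {"/v1/orders", "/v1/users", "/v1/payments"}

# Endpoints actually observed serving traffic.
observed_in_traffic = {"/v1/orders", "/v1/users", "/internal/debug",
                       "/v2/payments-beta"}

# Anything observed but not cataloged is a "shadow" API lacking
# oversight, and a candidate security risk.
shadow_apis = sorted(observed_in_traffic - managed_apis)
```

Flagged endpoints can then be onboarded into the catalog or shut down before they become an incident.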

Confidential Accelerators for AI Workloads: Leveraging Confidential VMs on the A3 machine series equipped with NVIDIA Tensor Core H100 GPUs, Google Cloud extends hardware-based data and model protection to GPUs handling sensitive AI and machine learning data.

GKE Container and Model Preloading: In preview, GKE introduces the capability to preload containers and models, accelerating workload cold-start to enhance GPU utilization, reduce costs, and maintain low AI inference latency.
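Why preloading cuts cold-start latency can be seen in a few lines: if the model is resident before the first request arrives, that request pays only for inference, not for the (slow) load. The timing below is simulated with a sleep; it illustrates the pattern, not GKE's mechanism.

```python
import time

def load_model():
    time.sleep(0.2)  # stand-in for pulling weights from disk or a registry
    return "model-weights"

MODEL = None

def handle_request_lazy():
    global MODEL
    if MODEL is None:          # cold start: load on first request
        MODEL = load_model()
    return "prediction"

def handle_request_preloaded(model):
    return "prediction"        # model already resident in memory

# Lazy: the first request absorbs the load time.
t0 = time.perf_counter()
handle_request_lazy()
lazy_first = time.perf_counter() - t0

# Preloaded: the load happens at startup, before traffic arrives.
model = load_model()
t0 = time.perf_counter()
handle_request_preloaded(model)
preloaded_first = time.perf_counter() - t0
```

Container and model preloading applies the same trade at the cluster level: pay the load cost up front so GPU time goes to inference instead of waiting.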


Q1: What is Gemini Code Assist and how does it benefit developers?

Gemini Code Assist is an AI-powered assistant for developers introduced at Google Cloud Next ’24. Built on Gemini 1.5, it supports a remarkable 1 million token context window, letting it reason over large codebases. Its counterpart, Gemini Cloud Assist, streamlines application design, operation, troubleshooting, and optimization by leveraging context from the user’s specific cloud environment and resources. Together with Gemini’s integration across Google Cloud applications, these tools empower developers to create exceptional applications effortlessly.

Q2: What are the key enhancements announced during the event to facilitate AI development?

Several new enhancements were announced during the event to bolster AI development on Google Cloud. These include the introduction of App Hub, BigQuery Continuous Queries, Natural Language Support in AlloyDB, and Gemini Code Assist in Apigee API Management. Each enhancement aims to streamline the development process, providing developers with powerful tools and integrations to build enterprise-grade applications efficiently.

Q3: How does Google Cloud support the transition of AI applications to production grade?

Google Cloud offers robust solutions tailored to support the transition of AI applications to production grade. Cloud Run, with its rapid deployment and scaling capabilities and its new Application Canvas, provides developers with a seamless path to production readiness. Additionally, Google Kubernetes Engine (GKE) offers comprehensive features, such as Gen AI Quick Start Solutions and support for Gemma, to empower developers in running and managing AI workloads efficiently.

Q4: What advancements have been made to enhance operational efficiency in managing AI applications?

Google Cloud has introduced several advancements to enhance operational efficiency in managing AI applications. Vertex AI MLOps Capabilities, including Prompt Management and Rapid Evaluation, facilitate experimentation, migration, and evaluation of AI models. Shadow API Detection helps identify APIs lacking proper oversight or governance, reducing the risk of security incidents. Furthermore, features like Confidential Accelerators for AI Workloads and GKE Container and Model Preloading optimize performance and reduce latency, ensuring smooth operation of AI applications.

