APAC’s Generative AI and Cloud Infrastructure Adoption Insights

Generative AI Adoption in APAC Businesses

Generative AI (GenAI) is emerging as a transformative force in Asia-Pacific (APAC) organizations, spreading across healthcare, citizen services and other domains. Deployment of and experimentation with GenAI reflect both promise and apprehension amid cost concerns and other challenges. Aaron Tan delves into the landscape.

At Singapore’s Government Technology Agency (GovTech), a dedicated team of engineers is building a chatbot powered by generative AI to help citizens navigate career transitions and make informed choices. Chang Sau Sheong, Deputy Chief Executive for Product and Engineering at GovTech, highlighted the project in an interview with Singapore media at Google Cloud Next 2023 in San Francisco. He underlined the chatbot’s potential to augment the government’s career coaching program, enabling it to scale beyond the limits of human career coaches.

Additionally, GenAI is finding utility in unexpected domains, such as facilitating bookings for high-demand badminton courts in public sporting facilities across Singapore. Sport Singapore is exploring the development of a GenAI-powered chatbot, envisioned to streamline the reservation process based on specific parameters.

The enthusiasm surrounding GenAI in APAC is palpable: recent studies indicate that 75% of respondents plan to adopt or are already experimenting with GenAI within the next 12 months. Financial commitments underscore this enthusiasm, with more than half of organizations earmarking budget for GenAI initiatives. Yet the swift ascendancy of GenAI inspires both awe and concern, positioning it as a critical strategic initiative after digital transformation, automation, cybersecurity, and cost-cutting.

Despite the burgeoning interest and rapid integration into strategic agendas, APAC organizations grapple with challenges. Issues such as algorithmic transparency, data quality, and scarce expertise persist, potentially hindering seamless GenAI integration. Apprehensions about intellectual property (IP) infringement also linger, prompting a significant proportion of respondents to consider open-source models or to develop solutions in-house to retain control over data and IP.

Organizations such as Singapore’s National University Health System (NUHS) are partnering with tech giants like Amazon Web Services (AWS) to pilot GenAI solutions for healthcare, focusing on automating the creation of patient discharge summaries. NUHS emphasizes precision, security, and exhaustive testing of GenAI applications, mirroring a broader trend among APAC organizations to reinforce AI governance protocols.

As APAC organizations navigate the GenAI landscape, cost implications loom large. While a fraction of respondents is willing to pay a premium for GenAI-integrated products or services, a significant percentage remains uncertain, highlighting the pressing need for robust financial operations (FinOps) practices to manage GenAI workload expenses efficiently. Adoption therefore demands a strategic approach: Bhargs Srivathsan of McKinsey underscores the importance of selecting a model suited to organizational needs, cautioning against overspending on needlessly complex models. Careful calibration of GenAI usage can significantly affect the feasibility and success of its integration into business operations.
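
As a rough illustration of the FinOps arithmetic involved, the sketch below estimates monthly spend for a GenAI workload from token volumes and per-token prices. It is a minimal sketch: the request volumes and prices are hypothetical placeholders, not quotes from any provider, and real FinOps practice would reconcile such estimates against actual billing data.

    # Hypothetical FinOps sketch: estimating monthly LLM API spend.
    # All volumes and prices below are illustrative placeholders.

    def monthly_llm_cost(requests_per_day: int, avg_input_tokens: int,
                         avg_output_tokens: int, price_per_1k_input: float,
                         price_per_1k_output: float, days: int = 30) -> float:
        """Return the estimated monthly cost in dollars."""
        input_cost = requests_per_day * avg_input_tokens / 1000 * price_per_1k_input
        output_cost = requests_per_day * avg_output_tokens / 1000 * price_per_1k_output
        return (input_cost + output_cost) * days

    # Comparing a large general-purpose model with a smaller, cheaper one
    # makes the point about model selection concrete.
    large = monthly_llm_cost(50_000, 800, 300, 0.0100, 0.0300)
    small = monthly_llm_cost(50_000, 800, 300, 0.0005, 0.0015)
    print(f"Large model: ${large:,.0f}/month; small model: ${small:,.0f}/month")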

Interdependency of Generative AI and Cloud Infrastructure

McKinsey’s Bhargs Srivathsan recently expounded on the symbiotic relationship between generative AI and cloud computing, explaining how their synergy accelerates cloud migration for organizations. Aaron Tan provides insight into this dynamic.

At the Cloud Expo Asia in Singapore, Bhargs Srivathsan, a partner at McKinsey and co-leader of the consultancy’s cloud operations and optimization initiatives, emphasized the inherent alignment between GenAI and cloud technologies. Srivathsan underscored the indispensable role of cloud infrastructure in realizing GenAI’s potential and, reciprocally, how GenAI facilitates the simplification of migrating operations to public cloud platforms.

One notable application of GenAI lies in its capacity to decode legacy code, such as Cobol, and translate it into languages compatible with cloud-native environments. Srivathsan also outlined how GenAI aids in modernizing legacy databases during migration by deciphering database schemas and proposing data structures based on data definition language (DDL) statements.
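
A minimal sketch of how such a translation step might be wired up is shown below. The call_llm function is a hypothetical stand-in for whichever model endpoint an organization uses, and the prompts are illustrative rather than drawn from any specific migration tool.

    # Hypothetical sketch of LLM-assisted code and schema modernization.
    # call_llm is a placeholder for any chat-completion-style endpoint.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("Wire this up to your model provider.")

    def translate_cobol(cobol_source: str, target_language: str = "Java") -> str:
        """Ask the model to port a legacy Cobol program to a cloud-friendly language."""
        prompt = (
            f"Translate the following Cobol program into idiomatic {target_language}, "
            "preserving its business logic and commenting any ambiguous sections:\n\n"
            + cobol_source
        )
        return call_llm(prompt)

    def propose_schema(ddl: str) -> str:
        """Ask the model to interpret legacy DDL and suggest a cloud-native schema."""
        prompt = (
            "Given the following legacy DDL, describe the entities and relationships "
            "it defines and propose an equivalent schema for a managed cloud "
            "database:\n\n" + ddl
        )
        return call_llm(prompt)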

Integrating GenAI tools into cloud migration strategies can cut migration times by an estimated 30-40%.

Srivathsan highlighted the maturation of large language models (LLMs) and the emergence of diverse use cases and tools as pivotal factors contributing to streamlining workload migration to public cloud infrastructures.

Leading cloud service providers such as Amazon Web Services (AWS) and Google Cloud have already established platforms and repositories, known as “model gardens”, that enable organizations to build, train, and run their own models, easing the integration of GenAI capabilities. Srivathsan emphasized the cloud’s pivotal role in initiating GenAI efforts, cautioning against in-house deployment because of constraints around proprietary datasets, security, data privacy, and potential intellectual property encumbrances.
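
As one concrete, hedged example of consuming such a hosted model, the sketch below invokes a foundation model through the AWS Bedrock runtime API. The model ID and request payload are illustrative assumptions; each model family on the platform defines its own payload format.

    # Minimal sketch: invoking a hosted foundation model via AWS Bedrock.
    # The model ID and payload shape below are illustrative assumptions.
    import json
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    payload = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarise our cloud migration plan."}],
    }

    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        body=json.dumps(payload),
        contentType="application/json",
        accept="application/json",
    )
    print(json.loads(response["body"].read()))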

While advocating cloud-based GenAI deployment, Srivathsan acknowledged the industry’s constraints on graphics processing unit (GPU) availability. She also described the typical strategic approach: organizations start with off-the-shelf models to validate business cases before expanding GenAI usage across the enterprise, then refine models with proprietary data and run inference in hyperscaler environments to achieve scalability and flexibility.

Looking ahead, Srivathsan projected a future where organizations might host some models closer to their premises, potentially training models concurrently with inferencing activities. However, she anticipated limited inferencing at the edge, primarily reserved for mission-critical applications mandating ultra-low latency, such as autonomous driving or real-time decision-making in manufacturing settings.

Srivathsan underscored that correct cloud implementation, encompassing robust security controls, appropriate data schemas, and sound architectural decisions, can expedite GenAI adoption and confer a significant competitive edge. Prudent model selection emerged as critical to containing costs: she advised organizations to meticulously identify models tailored to their needs to ensure cost-effectiveness and operational efficiency.

Organizations intending to deploy and optimize proprietary models should also consider the data pipelines required for launch and determine which datasets are integral to their operations. Srivathsan emphasized addressing the data aspect comprehensively, stressing the need for vigilant MLOps protocols to monitor data manipulation and model adjustments effectively.
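
One way to make that advice concrete: experiment-tracking tools such as MLflow can record which data and parameters produced which model version, keeping adjustments auditable. The sketch below is a generic illustration under that assumption, not a workflow prescribed by McKinsey; the dataset path, parameters, and metric are placeholders.

    # Minimal MLOps sketch using MLflow to keep model changes auditable.
    # Dataset path, parameters, and metric values are placeholders.
    import mlflow

    with mlflow.start_run(run_name="summary-model-v2"):
        # Record exactly which data and settings produced this model version.
        mlflow.log_param("dataset", "s3://example-bucket/notes-2023-10.parquet")
        mlflow.log_param("base_model", "example-llm-7b")
        mlflow.log_param("learning_rate", 2e-5)

        # ... fine-tuning would happen here ...

        mlflow.log_metric("eval_rouge_l", 0.47)  # placeholder evaluation score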

Red Hat’s Summit Highlights: OpenShift AI and Source Code Access

At the Red Hat Summit earlier this year, Red Hat expanded its platform capabilities by introducing OpenShift AI, which addresses the evolving requirements of organizations planning to integrate more artificial intelligence workloads within applications operating on OpenShift. This strategic move aligns with the company’s overarching objective of establishing itself as the preferred platform for developers and infrastructure operators, offering a distributed IT environment across public and private clouds and edge computing networks.

OpenShift AI provides a standardized foundation for building production-grade AI and machine learning models. Red Hat also partnered with IBM on Ansible Lightspeed, collaborating to train IBM’s Watson Code Assistant to generate Ansible automation playbooks.

Amid these advances in AI, however, Red Hat faced scrutiny from the open-source community following its decision to restrict access to the source code of Red Hat Enterprise Linux (RHEL). The decision, announced after the summit, was intended to deter profiteering from RHEL code by parties that contribute no value to the software.

In an interview with Computer Weekly, Red Hat CEO Matt Hicks discussed the company’s commitment to facilitating the adoption of generative AI across hybrid cloud environments and navigating the competitive landscape for machine learning operations (MLOps) tools. He also addressed concerns surrounding the RHEL source code restriction and how Red Hat intends to assuage community apprehensions about the decision.

Hicks highlighted several critical announcements made at the Red Hat Summit, emphasizing their significance for the company’s future trajectory. He began with AI, noting that it is an inherently hybrid workload: models are trained in expansive environments and executed closer to end users. Red Hat’s focus on open hybrid cloud architecture aligns with customers’ evolving needs as they navigate the impact of AI on their businesses.

The CEO outlined three core areas of development presented at the summit: secure supply chain initiatives, Service Interconnect, and the introduction of a developer hub. These endeavors aim to fortify the hybrid cloud landscape, enabling seamless connectivity and accessibility while fostering an environment conducive to developing diverse applications, including AI workloads.

On AI challenges, Hicks addressed explainability, particularly for large language models. He highlighted the Ansible Lightspeed work with IBM to deliver domain-specific AI generation capabilities and emphasized the importance of transparency in AI recommendations, ensuring adherence to licensing, copyright, and trademark obligations.

He detailed Red Hat’s role in MLOps within OpenShift AI, emphasizing the need for a disciplined approach to tracking model training and data modifications to ensure reproducibility and accuracy. Hicks differentiated between AI model training approaches, ranging from broad training on publicly available data to more specialized models confined to specific domains. On collaboration, he stressed Red Hat’s position as a platform-focused company, aiming to optimize model deployment across diverse hardware configurations while fostering partnerships with IBM and other providers of specialized capabilities.

Turning to the RHEL source code restriction, Hicks addressed community concerns, highlighting Red Hat’s efforts to keep RHEL accessible to contributors and pointing to CentOS Stream and Fedora as outlets for more aggressive modifications. He emphasized RHEL’s continued availability to customers while acknowledging the challenge of community perception surrounding the decision.

On Red Hat’s future focus, Hicks reiterated the company’s commitment to its platform-centric approach, emphasizing its role as a platform provider facilitating a broad spectrum of technological shifts, from data centers to edge computing, with a relentless focus on evolving enterprise needs.
