
Potential Risks in Going Ahead with a Full-blown Generative AI Strategy

Generative AI has the potential to transform existing business models with faster-to-scale, automated product development life cycles and augmentation. To make generative AI fulfill current and future business needs, decision-makers have to rely on a strong foundation for their generative AI strategy. This strategy not only supports investments in AI capabilities but also realigns the whole approach to people-led, data-driven transformations within and outside the organization. As generative AI becomes more democratized and accessible to everyone, the onus will be on business leaders to devise a strong, compliant generative AI strategy for 2023 and beyond. While there is no denying that generative AI and LLMs are for everyone to benefit from, there are inherent, often invisible risks and regulatory barriers that could jeopardize everything an organization stands for.

With that larger picture in mind, we decided to explore the various risks and regulatory issues that could arise from generative AI. Here are the biggest risks of going ahead with a generative AI strategy without preparation or planning.

Risk 1: Trustworthiness

For many users, generative AI still works as a black box.

Generative AI augments human performance and removes repetitive tasks with ever-evolving, self-trained algorithms. That capability comes at a heavy price, though, and most of it relates to the trust these GAIs and LLMs must earn from their users and regulators.

Generative AI tools like ChatGPT, DALL-E, and Make-A-Video (by Meta) are created using more than 45 terabytes of data, backed by millions of dollars of investment in research, training and iteration. Currently, a majority of generative AI tools run on infrastructure (hardware and VMs) provided and managed by the largest enterprise cloud services providers, such as AWS, Google Cloud and Microsoft Azure. Experts have highlighted the issues related to trustworthiness and the risks of going overboard with generative AI tools such as ChatGPT. Because these tools are developed using huge volumes of complex, heterogeneous data collected from different sources, it is practically impossible to track and predict the role each variable plays in generating the final result. Without a tangible roadmap for measuring the “truth of prediction” behind every piece of generative AI content, its trustworthiness will always remain questionable. That is why companies that use generative AI in their business processes should focus on building Augmented Large Language Models (ALMs) backed by explainable AI models.
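One practical way to make an LLM’s output more verifiable is to restrict it to retrievable source passages and require citations, so a reviewer can check each claim against a known source. Below is a minimal, hypothetical sketch of that pattern in Python; call_llm is a placeholder for whichever model endpoint you actually use, and the bracket-citation convention is illustrative, not any vendor’s API.

```python
# Minimal sketch: grounding answers in retrievable sources so a reviewer can
# check the "truth of prediction". call_llm is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    text: str           # the generated answer
    sources: list[str]  # passage IDs the answer cites, for human verification

def call_llm(prompt: str) -> str:
    # Hypothetical hook: plug in whichever model endpoint you actually use.
    raise NotImplementedError

def answer_with_sources(question: str, passages: dict[str, str]) -> GroundedAnswer:
    # Restrict the model to the supplied passages and ask it to cite their IDs.
    context = "\n".join(f"[{pid}] {text}" for pid, text in passages.items())
    prompt = (
        "Answer using ONLY the passages below, citing passage IDs in brackets.\n"
        f"{context}\n\nQuestion: {question}"
    )
    raw = call_llm(prompt)
    cited = [pid for pid in passages if f"[{pid}]" in raw]
    return GroundedAnswer(text=raw, sources=cited)
```

An answer that cites nothing, or cites passages it was never given, is a signal for the human reviewer to reject it.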

Any generative AI strategy should include humans who monitor and analyze AI-generated content for better outcomes. Thankfully, OpenAI has already announced new security considerations for plugins built by its users and developers.
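As a sketch of what that human oversight could look like in practice, the snippet below gates publication behind an explicit reviewer sign-off. The class and field names are hypothetical; a real editorial workflow would add audit logs, rejection reasons and escalation paths.

```python
# Minimal sketch of a human-in-the-loop gate: model output is only publishable
# after a named reviewer approves it. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved: bool = False
    reviewer: str = ""  # who signed off, for accountability

class ReviewQueue:
    def __init__(self) -> None:
        self._drafts: list[Draft] = []

    def submit(self, content: str) -> Draft:
        # Everything the model generates lands here first, unpublished.
        draft = Draft(content)
        self._drafts.append(draft)
        return draft

    def approve(self, draft: Draft, reviewer: str) -> None:
        draft.approved = True
        draft.reviewer = reviewer

    def publishable(self) -> list[Draft]:
        # Only human-approved drafts ever leave the queue.
        return [d for d in self._drafts if d.approved]
```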

Risk 2: Data Compliance and Privacy

Would a user ever know how their data is used in machine-learning training models? Is there a way to add consent-based data collection when developing standalone tools and plugins for GAIs?

“The hottest new programming platform is the napkin.”

– Paul Daugherty, Accenture Group Chief Executive
& Chief Technology Officer
[Referring to the use of OpenAI to generate a working website from a napkin drawing]

Unless AI companies are able to answer these questions with a prudent “yes,” there will always be risks associated with data compliance and privacy management.

Generative AI training models ingest data from the internet and proprietary channels. There is a lot of speculation about the way data is collected and used to train these GAIs and LLMs for specific business use cases. Questions related to privacy rights and data compliance are top of mind for everyone currently working with ChatGPT-like platforms, where prompt responses are based on internet data. There is no clarity about the policies, guidelines, documentation and practices necessary to prevent data-related compliance and governance failures.

Before marching ahead with your generative AI strategy, you must align it with all existing data privacy laws.

This includes meeting the compliance guidelines of the newly announced EU AI Act. Complementing the GDPR, the EU AI Act sets a global standard for AI developers and companies that create, market, use and influence the adoption of AI and machine-learning algorithms for facial recognition, biometric matching, digital data forensics and customer behavior analysis.

As a concrete measure to prevent risks from derailing your GAI development and adoption plans, you must consider ethical frameworks that bring humans, data and machines together under one roof, or, technically speaking, inside one generative AI code base.
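One concrete control point that follows from these laws is ensuring raw personal data never leaves the organization inside a prompt. The sketch below is a minimal illustration under that assumption, not a compliance solution; real GDPR or EU AI Act alignment requires far more than regexes, but it shows where the control sits.

```python
# Minimal sketch: scrub obvious identifiers before a prompt is sent to any
# third-party generative AI endpoint. Illustrative only, not full compliance.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(prompt: str) -> str:
    # Replace emails and phone numbers with placeholder tokens.
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +1 (555) 123-4567"))
# -> Contact Jane at [EMAIL] or [PHONE]
```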

Risk 3: Intellectual Property Rights

Generative AI technologies have already entered a wide range of industries. It is impossible to imagine a future without ChatGPT-like tools in banking, insurance, customer service, automotive, life sciences and education.

In fact, according to Accenture, banking and insurance, software development and capital markets will be the sectors most transformed by GAIs and LLMs. While competitive product innovation will be the new normal in the GAI space, it will be interesting to see how these tools hold up when the intellectual property rights of original artists and authors come into the discussion. When data training is built on the premise of content containing the “original work” of artists, there is a serious need to reference and cite those authors and artists in prompt responses. That would dispel the risks linked to IP rights.

But is this really happening, or even being talked about?

This report refutes the claims that generative AI steals credit from original artists and that any AI-generated content should be considered an act of theft. Existing IP rights have limitations, and GAIs have found a way to surf past those limitations when they use copyrighted works to generate new digital content from images, music, videos and code. If GAI developers eventually decided to compensate every artist for their AI-generated results, the payout per work would be too minuscule to even consider. Why?

A GAI like Stable Diffusion uses more than 600 million images to create aesthetically improved images; LaMDA builds its repository from 1.65 trillion words scraped from the web. If all the artists who own these works came to know about the “theft,” “piracy” or “inspiration” involved in developing AI-generated content, it would be too much of a mess. Ethically speaking, though, a GAI strategy should be built on strengthening IP rights and crediting every data point used in the creation.
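Crediting every contributing work implies tracking provenance at the level of each generated asset. Below is a minimal sketch of that idea: every output ships with a machine-readable manifest of the source works it drew on. All field names are hypothetical; a production system would tie this to licence databases and payment rails.

```python
# Minimal sketch of per-output provenance: each generated asset carries a
# manifest of source works so attribution (or compensation) can be computed.
import json
from dataclasses import dataclass, asdict

@dataclass
class SourceWork:
    work_id: str
    author: str
    licence: str

@dataclass
class GeneratedAsset:
    content: str
    sources: list[SourceWork]

    def manifest(self) -> str:
        # A machine-readable credit list that can ship alongside the asset.
        return json.dumps([asdict(s) for s in self.sources], indent=2)

asset = GeneratedAsset(
    content="<generated image bytes>",
    sources=[SourceWork("img-001", "Jane Artist", "CC-BY-4.0")],
)
print(asset.manifest())
```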

If businesses start owning and marketing AI-generated content and solutions as their own, it could become a larger risk to sustainability and brand reputation. To avoid these risks, IP regulators should take control of the AI scenario and empower all stakeholders in the ecosystem to appropriately compensate for every work of art used in developing the content.

Risk 4: Bias, Discrimination and Abuse

AI results can be biased! And research proves these biases can skew decision-making and elevate risks.

Who should be blamed for the biased, discriminatory and abusive behavior of a malfunctioning generative AI? Should it be the original data set these GAIs were built on, or should the final product design team be held liable for creating and approving a biased AI content generator?

The cost of dealing with a biased generative AI goes beyond mere monetary implications. Biased results can have an irreversible impact on the societal and economic values that current norms are built on. The biases could be especially prevalent in the healthcare and education sectors, where the cost of biased AI algorithms would be highest.

Accenture’s report highlights the risks of rushing ahead with generative AI tools that haven’t been tested for bias and discrimination before the final product reaches the market. Fixing liability could become the biggest barrier to taking generative AI tools to commercial heights.

AI experts could come together to demand transparency about how GAIs were trained in the first place. Organizations that use GAI could publish an online disclosure on the web and in apps to raise public awareness of pertinent bias and discrimination.

As biases in AI algorithms emerge from black boxes, it is important to wield powerful machine-learning techniques to combat the trickle-down effect of these risks. It is equally important to add “equity and inclusion” to every AI team, diversifying the perspectives that go into building a technically sound, compliant, bias-free AI-generation platform.
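Testing for bias before launch can start with something as simple as a counterfactual probe: run the same prompt template across demographic terms and compare a downstream score. The sketch below is a minimal illustration under that assumption; score_output is a hypothetical hook for your own sentiment or approval classifier, and ratios well below 1.0 flag groups the model treats less favourably.

```python
# Minimal counterfactual bias probe: generate n outputs per demographic group
# from the same template and compare mean scores across groups.
from statistics import mean

def score_output(text: str) -> float:
    # Hypothetical: return e.g. a sentiment or approval score in [0, 1].
    raise NotImplementedError

def disparity(generate, template: str, groups: list[str], n: int = 20) -> dict[str, float]:
    means = {
        g: mean(score_output(generate(template.format(group=g))) for _ in range(n))
        for g in groups
    }
    best = max(means.values())
    # Each group's mean score relative to the best-scoring group.
    return {g: m / best for g, m in means.items()}

# Usage sketch (my_model is whatever callable wraps your generator):
# disparity(my_model, "Write a performance review for a {group} engineer.",
#           ["female", "male", "nonbinary"])
```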

This is what OpenAI is doing to remove bias from its algorithms.

[Diagram: Building ChatGPT]

Conclusion: Augmented intelligence is the future of Gen AI

All the risks identified above are bad for business and for customers, bias in particular. The good news is that these risks are manageable, because they originate in data, and data can be filtered to prevent abuse and bias. Data science teams will be central to the whole idea of stripping risk out of generative AI.

How to achieve a risk-free AI ecosystem?

Data science teams have to harness the power of AI to mitigate the risks emerging from Generative AI.

Not artificial intelligence, but AUGMENTED INTELLIGENCE.

Augmented intelligence, assisting generative AI strategies, could prevent conscious and unconscious biases from seeping into data sets. A business-friendly generative AI strategy would be sustained by weights, checks and balances, opening new frontiers for the human race to do much more with all the data currently available on the web, in apps and on platforms. Legislation, and calibrated incentives for organizations that use generative AI ethically and responsibly, would create the new champions of mankind in the coming months. Which company will steal the limelight first and hold it for the long term? We will keep a pulse on this.

[To share your insights with us, please write to sghosh@martechseries.com]
