Arize AI Unveils Prompt Engineering and Retrieval Tracing Workflows For LLM Troubleshooting

Arize AI, a market leader in machine learning observability, debuted industry-first capabilities for troubleshooting large language models (LLMs) at Google Cloud Next ’23 today.

Arize’s new prompt engineering workflows, including a new prompt playground, enable teams to identify prompt templates that need improvement, iterate on them in real time, and verify that LLM outputs improve.

Prompt analysis is an important component of troubleshooting an LLM’s performance. Often, performance can be improved simply by testing different prompt templates or iterating on an existing one to elicit better responses.

With these new workflows, teams can:

  • Uncover responses with poor user feedback or evaluation scores
  • Identify the template associated with poor responses
  • Iterate on the existing prompt template
  • Compare responses across prompt templates in a prompt playground, as sketched below
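
Arize’s prompt playground handles this comparison inside its platform; as a rough, framework-agnostic sketch of the idea, the Python below renders two candidate templates over the same examples and collects responses side by side. The `call_llm` stub and the template strings are hypothetical placeholders, not Arize APIs.

```python
# Minimal sketch: compare two prompt templates against the same inputs.
# call_llm and the templates are illustrative placeholders, not Arize APIs.
from typing import Dict, List

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client a team uses; replace with a real model call."""
    return f"[model response to a {len(prompt)}-character prompt]"

# Two candidate templates for the same task; v2 adds explicit grounding instructions.
TEMPLATES: Dict[str, str] = {
    "v1": "Answer the question.\n\nQuestion: {question}",
    "v2": (
        "Answer the question using only the provided context. "
        "If the context is insufficient, say so.\n\n"
        "Context: {context}\n\nQuestion: {question}"
    ),
}

def compare_templates(examples: List[Dict[str, str]]) -> Dict[str, List[str]]:
    """Render each template on every example and collect responses side by side."""
    results: Dict[str, List[str]] = {name: [] for name in TEMPLATES}
    for example in examples:
        for name, template in TEMPLATES.items():
            prompt = template.format(**example)
            results[name].append(call_llm(prompt))
    return results

if __name__ == "__main__":
    examples = [{"question": "What is ML observability?", "context": "Observability docs excerpt."}]
    for name, responses in compare_templates(examples).items():
        print(name, responses)
```

Scoring the collected responses, for example with user feedback or evaluation scores as described above, then shows which template produces better outputs.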

Arize is also launching additional search and retrieval workflows to help teams using retrieval augmented generation (RAG) troubleshoot where and how retrieval needs to be improved. These new workflows help teams identify where they may need to add context to their knowledge base (or vector database), pinpoint when retrieval failed to surface the most relevant information, and ultimately understand why their LLM hallucinated or generated sub-optimal responses.
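
Arize performs this analysis within its platform; as a minimal sketch of the underlying check, the snippet below scores each retrieved chunk against the query embedding with cosine similarity and flags low-scoring retrievals. The function names, inputs, and threshold are illustrative assumptions, not details from Arize’s product. Queries whose retrievals are consistently flagged point to gaps in the knowledge base and are likely sources of hallucinated or sub-optimal responses.

```python
# Minimal sketch: flag low-relevance retrievals in a RAG pipeline.
# The 0.7 threshold and function names are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_weak_retrievals(
    query_embedding: np.ndarray,
    retrieved_embeddings: list,
    threshold: float = 0.7,
) -> list:
    """Return indices of retrieved chunks whose similarity to the query falls below the threshold."""
    scores = [cosine_similarity(query_embedding, emb) for emb in retrieved_embeddings]
    return [i for i, score in enumerate(scores) if score < threshold]

if __name__ == "__main__":
    # Toy example with random embeddings standing in for real query/chunk vectors.
    rng = np.random.default_rng(0)
    query = rng.normal(size=8)
    chunks = [rng.normal(size=8) for _ in range(5)]
    print(flag_weak_retrievals(query, chunks))
```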

“Building LLM-powered systems that responsibly work in the real world is still too difficult today,” said Aparna Dhinakaran, Co-Founder and Chief Product Officer of Arize. “These industry-first prompt engineering and RAG workflows will help teams get to value and resolve issues faster, ultimately improving outcomes and proving the value of generative AI and foundation models across industries.”
