Developers now have more LLMs to choose from when iterating and building production-ready RAG applications
Elastic announced support for Amazon Bedrock-hosted models in the Elasticsearch Open Inference API and Playground. Developers now have the flexibility to choose any large language model (LLM) available on Amazon Bedrock to build production-ready RAG applications.
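The announcement centers on the Open Inference API, which lets a Bedrock-hosted model be registered as a reusable inference endpoint in Elasticsearch. The following is a minimal sketch, assuming an Elasticsearch deployment with the amazonbedrock inference service available; the endpoint ID (bedrock-chat), region, provider, and model ID are illustrative placeholders, and exact field names should be verified against the Inference API blog referenced below.

```python
import os
import requests

ES_URL = os.environ["ES_URL"]          # e.g. "https://localhost:9200"
ES_API_KEY = os.environ["ES_API_KEY"]  # Elasticsearch API key

headers = {
    "Authorization": f"ApiKey {ES_API_KEY}",
    "Content-Type": "application/json",
}

# Register a Bedrock-hosted chat model as a "completion" inference endpoint.
# The endpoint ID ("bedrock-chat") and the provider/model values are
# illustrative; substitute the Bedrock model you actually want to use.
resp = requests.put(
    f"{ES_URL}/_inference/completion/bedrock-chat",
    headers=headers,
    json={
        "service": "amazonbedrock",
        "service_settings": {
            "access_key": os.environ["AWS_ACCESS_KEY_ID"],
            "secret_key": os.environ["AWS_SECRET_ACCESS_KEY"],
            "region": "us-east-1",
            "provider": "anthropic",
            "model": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)
resp.raise_for_status()
print(resp.json())
```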
“Our latest integration with Amazon Bedrock continues our focus on making it easier for AWS developers to build next-generation search experiences,” said Shay Banon, founder and chief technology officer at Elastic. “By leveraging Elasticsearch and Amazon Bedrock’s extensive model library, developers can deliver transformative conversational search.”
Developers using Elasticsearch and models hosted on Amazon Bedrock can now store and use embeddings, refine retrieval to ground answers with proprietary data, and more. Amazon Bedrock models are also available in the low-code Playground experience, giving developers more choice when A/B testing LLMs.
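Once such an endpoint exists, a grounded RAG flow reduces to retrieving passages from an index and passing them to the Bedrock-backed endpoint as context. The sketch below assumes the bedrock-chat endpoint created above plus a hypothetical docs index with a content field; a simple lexical query stands in for whatever retrieval (vector, semantic, or hybrid) an application actually uses.

```python
import os
import requests

ES_URL = os.environ["ES_URL"]
HEADERS = {
    "Authorization": f"ApiKey {os.environ['ES_API_KEY']}",
    "Content-Type": "application/json",
}

question = "What is our refund policy?"

# 1. Retrieve grounding passages from a hypothetical "docs" index.
#    A lexical match query keeps the sketch simple; a vector or semantic
#    field backed by a Bedrock embedding endpoint is used the same way
#    from the caller's point of view.
search = requests.post(
    f"{ES_URL}/docs/_search",
    headers=HEADERS,
    json={"size": 3, "query": {"match": {"content": question}}},
).json()
context = "\n".join(hit["_source"]["content"] for hit in search["hits"]["hits"])

# 2. Ask the Bedrock-backed completion endpoint created earlier to answer
#    using only the retrieved context.
answer = requests.post(
    f"{ES_URL}/_inference/completion/bedrock-chat",
    headers=HEADERS,
    json={
        "input": f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    },
).json()
print(answer)
```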
Support for Amazon Bedrock is available today; read the Inference API and Playground blogs to get started.
Elastic, the Search AI Company, enables everyone to find the answers they need in real time, using all their data, at scale. Elastic's solutions for search, observability and security are built on the Elastic Search AI Platform, the development platform used by thousands of companies, including more than 50% of the Fortune 500.
Elastic and associated marks are trademarks or registered trademarks of Elastic N.V. and its subsidiaries. All other company and product names may be trademarks of their respective owners.