Sinequa customers gain unprecedented search relevance and accuracy at scale with the industry’s most comprehensive native neural search for enterprise use
Enterprise search leader Sinequa announced the addition of advanced neural search capabilities to its Search Cloud Platform, providing unprecedented relevance and accuracy – at scale – to enterprise users.
As an optional capability of Sinequa’s Search Cloud platform, Neural Search is the first commercially available solution to use four deep learning language models. These models are pre-trained and ready to use in combination with Sinequa’s advanced Natural Language Processing (NLP) and semantic search for the best relevance and question-answering capability, optimized to run efficiently even at scale.
“Quality and breadth of information retrieval and search have long been recognized as primary drivers of productivity, but relevance is key to enabling business insights and more informed decision-making,” said Alexandre Bilger, President and CEO of Sinequa. “With Sinequa’s Neural Search capabilities, we’ve added best-in-class neural search to our existing best-in-class statistical search. This enhances our NLP, already refined through a decade of experience serving world-class brands and innovation leaders, with four state-of-the-art deep learning language models and the elasticity of the cloud. The resulting solution is the most powerful AI available for enterprise search.”
Neural search models have been used in internet search by Google and Bing since 2019, but their computing requirements rendered them too costly and slow for most enterprises, especially at production scale. Sinequa optimized the models and collaborated with the Microsoft Azure and NVIDIA AI/ML teams to deliver a high-performance, cost-efficient infrastructure that supports intensive Neural Search workloads without a huge carbon footprint. Neural Search is optimized for Microsoft Azure and the latest NVIDIA A10 or A100 Tensor Core GPUs to efficiently process large amounts of unstructured data as well as user queries.
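For context, the kind of GPU-accelerated embedding workload described above can be approximated with off-the-shelf open-source tooling. The snippet below is a minimal sketch using the sentence-transformers library and a generic MiniLM model as stand-ins; it is not Sinequa’s proprietary model or pipeline, and the batch size is an arbitrary illustration.

```python
# Minimal sketch: batched embedding of unstructured documents on a GPU.
# The sentence-transformers library and MiniLM model are generic stand-ins,
# not the models shipped with Sinequa Neural Search.
from sentence_transformers import SentenceTransformer

documents = [
    "Quarterly revenue grew 12% driven by cloud subscriptions.",
    "The maintenance manual covers turbine blade inspection intervals.",
    # ...typically millions of passages in an enterprise index
]

# device="cuda" runs inference on an NVIDIA GPU (e.g., A10/A100) if one is available.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2", device="cuda")

# Large batches keep the GPU busy and amortize data-transfer overhead.
embeddings = model.encode(documents, batch_size=256, show_progress_bar=True)
print(embeddings.shape)  # (num_documents, embedding_dimension)
```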
Sinequa’s Neural Search improves relevance and is often able to directly answer natural language questions. It does this with deep neural nets that go beyond word-based search to better leverage meaning and context. Sinequa’s Search Cloud platform combines neural search with its extensive NLP and statistical search. This unified approach provides more accurate and comprehensive search results across a broader range of content and use cases.
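The article does not disclose how Sinequa fuses its statistical and neural signals. As a rough illustration of the general hybrid-search pattern it describes, the sketch below blends a BM25 keyword score with a dense-embedding similarity score, using the open-source rank_bm25 and sentence-transformers libraries and an arbitrary 0.5/0.5 weighting; all of these choices are assumptions for illustration only.

```python
# Illustrative hybrid search: combine a lexical (BM25) score with a neural
# (dense embedding) similarity score. Libraries, model, and weights are
# generic assumptions, not Sinequa's implementation.
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

corpus = [
    "How to reset your VPN password from the self-service portal.",
    "Expense reports must be submitted within 30 days of travel.",
    "The VPN client supports split tunneling on managed laptops.",
]
query = "I forgot my VPN password, what do I do?"

# Lexical signal: classic term-frequency scoring over tokenized text.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
lexical_scores = bm25.get_scores(query.lower().split())

# Neural signal: cosine similarity between query and document embeddings.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
doc_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
neural_scores = util.cos_sim(query_emb, doc_emb)[0].tolist()

# Simple weighted fusion; production systems normalize and tune these weights.
combined = [0.5 * l + 0.5 * n for l, n in zip(lexical_scores, neural_scores)]
best = max(range(len(corpus)), key=lambda i: combined[i])
print(corpus[best])
```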
Sinequa’s four deep learning models are trained for specific tasks and work in concert for the best possible relevance for any enterprise scenario. All four models are fully pre-trained, configured and optimized for enterprise content. This eliminates the laborious and costly process of tagging large training sets, training custom models, and updating them over time. With Sinequa’s Neural Search, organizations can now deploy intelligent search-based solutions with cutting-edge deep learning technology quickly and easily.
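As a generic example of what “fully pre-trained, no labeling or custom training” means in practice, the snippet below runs an off-the-shelf extractive question-answering model through the Hugging Face transformers pipeline. The model name and example text are assumptions for illustration and are unrelated to Sinequa’s four models.

```python
# Sketch: extractive question answering with a fully pre-trained model,
# i.e., no tagging of training sets or custom fine-tuning required.
# The transformers pipeline and roberta-base-squad2 model are open-source
# stand-ins, not the models shipped with Sinequa Neural Search.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "Neural Search is an optional capability of the Search Cloud platform. "
    "It combines deep learning language models with NLP and statistical search."
)
result = qa(question="What does Neural Search combine?", context=context)
print(result["answer"], result["score"])
```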
“Sinequa is differentiating itself through its use of deep learning (artificial neural networks), by applying multiple deep learning models to provide more accurate and relevant search results,” writes Alan Pelz-Sharpe of Deep Analysis in a recent report.