How to build integrations that power enterprise AI search 

Every employee has questions that require a quick and concise response, whether the questions are specific to their role or not.

For example, an account executive might have questions about the latest sales commission model; at the same time, they may have questions about the company’s PTO policy.

Enterprise AI search can help the rep (and any employee) in either type of scenario.

We’ll go deeper on how enterprise AI search can work by reviewing several real-world examples. We’ll also break down how integration data can power enterprise AI search functionality. 

But to get started, let’s align on what, exactly, enterprise AI search means.

{{this-blog-only-cta}}

What is enterprise AI search?

Enterprise AI search is any search functionality in an application that allows employees to ask questions and receive answers. The search relies on a machine learning model that can understand the query’s intent and use integrated customer data to generate relevant output.

The output can combine plain text with links that give additional context on the answer.

Screenshot of Assembly's AI feature, "Dora AI"
Assembly, an HR platform, offers an enterprise AI search that generates output with plain text and links; the latter allow searchers to dig deeper into the answer with ease

How enterprise AI search works

While enterprise AI search’s functionality can vary across providers, it typically works as follows under the hood:

1. A user submits a query in your enterprise search bar.

2. That query gets embedded—or turned into a vector. In other words, the query gets transformed from plain text into a list of numbers.

3. The embedded query is compared to existing vectors in the database via similarity metrics (e.g., cosine similarity) to find the closest matches. The relevant documents and/or data identified from the database are then retrieved. 

4. The LLM receives both the user’s query and the relevant context (i.e., the documents and/or data retrieved).

5. The LLM then uses the query and context to generate an output for the user.

A diagram of how enterprise AI search works

Note: The workflow above uses a technique commonly referred to as retrieval-augmented generation (RAG).
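To make these steps concrete, here’s a minimal sketch of the workflow in Python. The `embed` and `llm_generate` functions are hypothetical stand-ins for whatever embedding model and LLM provider you use, and the in-memory document list stands in for a real vector database.

```python
import math

# Toy in-memory "vector database": (vector, text) pairs created ahead
# of time by embedding your documents
DOCUMENTS: list[tuple[list[float], str]] = []

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms if norms else 0.0

def answer_query(query: str, top_k: int = 5) -> str:
    # Step 2: embed the query (plain text -> vector of numbers).
    # `embed` is a hypothetical stand-in for your embedding model.
    query_vector = embed(query)

    # Step 3: rank stored vectors by cosine similarity and retrieve
    # the closest matches
    ranked = sorted(
        DOCUMENTS,
        key=lambda doc: cosine_similarity(query_vector, doc[0]),
        reverse=True,
    )
    context = "\n\n".join(text for _, text in ranked[:top_k])

    # Steps 4-5: hand the user's query and the retrieved context to
    # the LLM. `llm_generate` is a hypothetical stand-in for your LLM
    # provider's completion call.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm_generate(prompt)
```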

Examples of enterprise AI search

To help make enterprise AI search more tangible, let’s break down a few real-world examples.

Notion

The productivity app helps users build internal wikis, organize and track projects, and—using its enterprise AI search—find the answers they need to execute on their work.

Since the platform collects a comprehensive set of data within an organization—and can integrate with customers’ file storage systems to ingest the files’ contents—its enterprise AI search (dubbed Notion AI) can answer nearly any question your employees have. This can be anything from the content marketing team’s 2025 strategy to the company’s current office locations to the product roadmap for the coming quarter.

Moreover, within the output’s plain text, Notion AI uses citations to help readers learn more and to give them confidence in the answer’s accuracy. 

Screenshot of Notion AI's output
Notion AI users can hover over the accompanying numbers to find Notion docs that support its claims. Users can also find relevant files below the plain text

https://www.merge.dev/blog/rag-examples?blog-related=image

Guru

Guru’s enterprise AI search—which is the company’s core offering—relies on integrated data that goes well beyond the contents stored within the app.

Guru integrates with customers’ file storage systems, CRMs, ticketing platforms, and HRIS solutions. Once connected, Guru can feed the data from all of these systems to the machine learning (ML) model it uses, allowing the model to perform RAG for all kinds of queries across its customers' user base.

For example, it can answer questions that are documented within Guru itself, like the location of a team onsite.

Screenshot of Guru's enterprise AI search

But queries that require information in other systems—e.g., figuring out how to resolve a particular type of issue for a customer—will lead Guru to rely on the customers' integrated systems of record.

Screenshot of Guru's enterprise AI search

How to build enterprise AI search functionality

As our examples show, every enterprise AI search needs customer-facing integrations (i.e., product integrations) to power its outputs.

Otherwise, you risk limiting the search to information stored natively within the app. And, assuming your platform isn’t the system of record across teams and processes, this would curb the search’s value.

With this in mind, here’s how you can build the appropriate set of product integrations for your enterprise AI search:

1. Identify current and potential queries

If your enterprise AI search functionality is already built and in use, this exercise is easy: Analyze the searches currently taking place.

More likely, you’re still building your enterprise AI search and/or want to expand its value across teams. If you fall into this bucket, you’ll need to spend time pinpointing the most relevant queries—for example, by talking to your ideal customer profiles (both internally and at customer and prospect accounts) about how they’d use the enterprise search day to day.

2. Map the current and/or forecasted queries to the apps that store the relevant data

Once you know the types of searches your users will make, you can pinpoint the applications you’ll need to integrate with. 

Here’s how this can work:

  • If your users need to access data on sales opportunities, you’ll need to integrate with customers’ CRMs
  • If your users need to access data related to customer issues, you'll need to integrate with customers’ ticketing systems
  • If your users need to access data on their employer’s policies and procedures, you’ll need to integrate with HRIS solutions
  • And if your users need to access any of the above (and more), you’ll need to integrate with file storage solutions, as the documents and files stored in these solutions offer a wide range of information
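If it helps to formalize this step, here’s a small illustrative sketch of mapping query topics to integration categories. The topic labels and categories are examples, not a fixed taxonomy:

```python
# Illustrative mapping from the kinds of queries users make to the
# integration categories that store the relevant data
QUERY_TOPIC_TO_CATEGORY = {
    "sales_opportunities": "crm",
    "customer_issues": "ticketing",
    "policies_and_procedures": "hris",
    "documents_and_files": "file_storage",
}

def categories_needed(topics: list[str]) -> set[str]:
    """Return the integration categories required to answer queries
    about the given topics."""
    return {
        QUERY_TOPIC_TO_CATEGORY[topic]
        for topic in topics
        if topic in QUERY_TOPIC_TO_CATEGORY
    }

# Example: a pilot customer's expected queries point to three categories
print(categories_needed(
    ["sales_opportunities", "customer_issues", "documents_and_files"]
))  # -> {'crm', 'ticketing', 'file_storage'}
```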

3. Build integrations with those applications

Once you know the applications you need to integrate with your enterprise AI search, you’ll need to decide how to build those integrations. 

This essentially leaves you with three options:

1. An embedded iPaaS solution, which lets you build one integration at a time with your product, as well as implement workflow automations that work across these systems.

Embedded iPaaS vendors
A snapshot of the embedded iPaaS solutions you can evaluate and choose from

2. A unified API solution, which lets you add hundreds of integrations to your product through a single integration build. These integrations can also span several categories, from HRISs to file storage platforms to CRMs (see the sketch below for what a request to a unified API can look like).

Unified API vendors
A snapshot of the unified API solutions you can evaluate and choose from

3. Native integrations, or simply tasking your engineers with both building and maintaining your product integrations over time.
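To make the unified API option more tangible, here’s a hedged sketch of what a request could look like. The base URL, endpoint path, and header names are made up for illustration—an actual vendor’s API will differ, so check their docs:

```python
import requests

# Hypothetical unified API endpoint for file storage integrations
BASE_URL = "https://api.unified-vendor.example/filestorage/v1"

def fetch_files(api_key: str, account_token: str) -> list[dict]:
    """Fetch file records for one customer's linked account through a
    single, provider-agnostic endpoint."""
    response = requests.get(
        f"{BASE_URL}/files",
        headers={
            # Your API key with the unified API vendor
            "Authorization": f"Bearer {api_key}",
            # Identifies which end customer's linked account to read
            "X-Account-Token": account_token,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])
```

The point of this pattern: one build against the unified endpoint covers every file storage provider the vendor supports, whereas the embedded iPaaS and native routes require separate work per application.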

https://www.merge.dev/blog/embedded-ipaas-vs-unified-api?blog-related=image

Build all the integrations your enterprise AI search needs with Merge

Merge, the leading unified API platform, is uniquely positioned to support your enterprise AI search for a number of reasons:

  • Integration coverage: Merge offers 200+ integrations across several categories—including file storage, CRM, HRIS, accounting, ATS, and ticketing—allowing you to integrate all the systems your customers need
  • Data normalization: Merge normalizes all of your customers’ data into its Common Models, or its normalized data models, before adding it to your vector database. This gives the LLM clear and consistent inputs, which allow it to generate more reliable outputs
  • Advanced syncing features: Merge offers several features that let you access data beyond its Common Models (e.g., Field Mapping), letting you feed additional data to your vector database. The LLM you use can then leverage an increased level of context to answer more queries and/or provide more specificity in its outputs
  • Access control lists (ACLs): Merge uses ACLs across integrations to ensure that the enterprise AI search only generates outputs that fall under the searcher’s levels of permissions. For example, if a user asks what a colleague’s salary is and the user doesn’t have admin-level access to the integrated HRIS, they’d receive a message that says they don’t have access to that information (see the sketch after this list)
  • Enterprise-grade data security: Merge has not only built enterprise-grade features and functionality to keep your customers’ documents and data safe—like ACLs, Scopes, and bring your own key—but also complies with GDPR, SOC 2 Type 2, ISO 27001, and other security standards and regulations
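To show how ACLs can be enforced at retrieval time, here’s a minimal, generic sketch (reusing the cosine_similarity helper from the earlier RAG sketch). The data shapes are illustrative and not Merge’s actual API—the key idea is filtering on permissions before anything reaches the LLM:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    vector: list[float]
    allowed_user_ids: set[str]  # derived from the source system's ACL

def retrieve_for_user(
    chunks: list[Chunk],
    query_vector: list[float],
    user_id: str,
    top_k: int = 5,
) -> list[Chunk]:
    # Filter first so a user can never retrieve content they lack
    # permission to see (e.g., a colleague's salary record)
    visible = [c for c in chunks if user_id in c.allowed_user_ids]
    visible.sort(
        key=lambda c: cosine_similarity(query_vector, c.vector),
        reverse=True,
    )
    return visible[:top_k]
```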

{{this-blog-only-cta}}

Enterprise AI search FAQ

In case you have more questions on enterprise AI search, we’ve addressed several common ones below:

What are some popular enterprise AI search platforms? 

Widely used solutions include Guru, Simpplr, Glean, and Aisera.

Before choosing any, you should review the integrations they offer, the large language model(s) they use, and their search's level of interactivity. For example, some may only let you retrieve information, while others can let you make requests and have the LLM perform actions on your behalf (i.e., it’s agentic).

What capabilities should enterprise AI search offer?

Enterprise AI search should support the following functionality: 

  • Concise outputs with links to sources: The search leverages natural language processing (NLP) to understand user intent and interpret semantic relationships between words. This, paired with a large language model (or several), allows the enterprise AI search to generate clear and concise summaries—often just a few sentences long—that include links to sources in case the searcher wants to learn more
  • Comprehensive integration coverage: The search should let customers integrate with a wide range of solutions within and across popular categories—from file storage to ticketing to CRM to HRIS. This broad coverage helps the LLM retrieve the latest information for any potential query
  • Access control lists (ACLs): The enterprise AI search should only share information that falls within the user’s levels of permissions. For example, an entry-level sales rep shouldn’t be able to retrieve information from a board deck, as they don’t have the permissions to access it in the integrated file storage solution
  • Personalized outputs: The enterprise AI search can also have context on the user making the query, which can help the LLM generate more relevant and helpful outputs (a small sketch of this follows below)
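On that last point, personalization can be as simple as including user context in the prompt. Here’s a small sketch, with illustrative profile fields:

```python
def build_prompt(query: str, context: str, user: dict) -> str:
    """Assemble a prompt that tailors the answer to the requesting
    user. The `user` profile fields here are illustrative."""
    return (
        f"You are an enterprise search assistant. The user is "
        f"{user['name']}, a {user['role']} on the {user['team']} team.\n\n"
        "Answer the question using only the context below, and tailor "
        "the answer to the user's role.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```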

What are the benefits of using enterprise AI search?

Some of the benefits include:

  • Time savings: Employees don’t have to sift through several applications to find information. They can simply make a search and get the information they need
  • Employee experience: Allowing employees to spend less time looking for information lets them focus more on the work they enjoy and can help them work fewer hours
  • Productivity gains: In addition to answering basic questions, enterprise AI search can complete tedious and complex tasks (e.g., putting together an analysis of the top customer issues from last quarter). This can help employees make more informed business decisions, faster
  • Cross-functional alignment: By providing a single source of truth for company information, employees can more easily align on critical areas, such as the current budget for a specific company initiative