IBM Watson™ Discovery Service Ideas


Leverage enrichments during NLQ or keyword search

Customer : IBM  Teacher Advisor with Watson


Use-Case : As a teacher, I want to be able to search using natural language and get the right lessons and activities for my K-5 students.


Description : The TA with Watson team has a custom WKS model trained on K-5 education content focused on the Math curriculum.  They want to use Watson Discovery Service to surface relevant educational content for utterances like "I need to know what tactile activities are available for fractions".  With an enriched Discovery collection, there is no way to retrieve a matching passage such as "XX Activities are good for kids that can use physical activity YYYY...." because no semantic search is applied at query time. One alternative is to use "Filters" in the query, which requires the application or the user to supply them ahead of time. Another is to run NLU with the same custom model to extract the entities and then issue a filtered search. Either way, more work falls on the query-building side. 
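The NLU-then-filter workaround mentioned above could be sketched roughly as follows. This is only an illustration: `extract_entities` stands in for a real call to Watson NLU with the custom WKS model, and the `enriched_text.entities.text` field path is an assumption about how the enrichments appear in the collection.

```python
# Sketch of the NLU-plus-filter workaround (placeholder logic, not the
# real NLU service call).

def extract_entities(utterance):
    """Placeholder for an NLU call against the custom WKS model.

    Returns (entity_type, entity_text) pairs detected in the utterance.
    """
    vocabulary = {"tactile": "ACTIVITY_TYPE", "fractions": "TOPIC"}
    return [(etype, word) for word, etype in vocabulary.items()
            if word in utterance.lower()]

def build_filter(entities):
    # Discovery's query language joins field:value clauses with commas (AND).
    return ",".join(
        'enriched_text.entities.text:"{}"'.format(text)
        for _, text in entities
    )

utterance = "I need to know what tactile activities are available for fractions"
filter_string = build_filter(extract_entities(utterance))
print(filter_string)
```

The resulting `filter_string` would then be passed as the `filter` parameter of a Discovery query alongside the natural-language query, which is exactly the extra query-building work this idea would like Discovery to absorb.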


Need : Something like the Primary Search components of the DeepQA factoid pipeline, where the constituents of a query (LAT, Focus, Entities, Relations) are automatically taken into account while performing search. That approach yielded far better results in the first pass. 

  • Guest
  • Jan 25 2018
  • Planned
Why is it useful?
Who would benefit from this IDEA? As a teacher, I want to be able to use NLQ to get the relevant results without qualifying my search using filters
    February 07, 2018 02:09

    I think we'd get some good benefit out of applying the WKS model to the query and having the annotations found influence the results.  Currently the WKS model produces metadata that can be filtered on, but even if I use NLU to apply the model to the query, I don't want to exclude results that don't include the entities and relationships detected in the natural language question; I just want to affect the sorting/ranking/relevance.

    February 07, 2018 02:15

    My interest in this feature is prompted by my customer (UBank) who are struggling to see the benefits of a WKS model that understands their products and work instructions, when it's not really used by the Natural Language Query feature to find the most relevant document.

  • Markus Graulich commented
    February 20, 2018 09:06

    Fully agree; without Discovery leveraging a WKS custom model in an NLQ, it is hard to justify the labor-intensive work of building a WKS custom model.

    March 09, 2018 03:09

    An update: based on my testing on my customer's project, I've been able to show some success in making use of the WKS types and subtypes in my collection.


    What I did was use Query Expansion to add my WKS entity subtypes to the natural language query.  For example:

        "expansions": [{
                "input_terms": [
                    "ball", "rocking horse", "yoyo"
                "expanded_terms": [
            }, {
                "input_terms": [
                    "apple", "banana", "finger lime"
                "expanded_terms": [

    Where TOY and FRUIT were entity subtypes of PRODUCT in my WKS model.

    When I added these query expansions to my collection that had my WKS model applied, I found that searching on a natural language query like this:

    "How can I purchase a rocking horse"

    Would also find documents talking about purchasing that had the TOY entity subtype listed against them.


    In my testing I could see definite improvements using this method, with 30% more relevant documents found in the top 5 returned results against my test question set.  I expect that we'd see the same or better improvements if this Aha! idea was implemented.
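For reference, an expansion list like the one above could be built programmatically and sent to Discovery's v1 expansions endpoint. This is a sketch under assumptions: the environment and collection IDs, version date, and endpoint URL are placeholders, and the actual authenticated upload call is only indicated in a comment.

```python
import json

# Expansion list matching the example above: natural-language terms are
# expanded to the WKS entity subtypes TOY and FRUIT stored in the collection.
expansions = {
    "expansions": [
        {
            "input_terms": ["ball", "rocking horse", "yoyo"],
            "expanded_terms": ["TOY"],
        },
        {
            "input_terms": ["apple", "banana", "finger lime"],
            "expanded_terms": ["FRUIT"],
        },
    ]
}

payload = json.dumps(expansions)

# The payload would be POSTed with service credentials to something like:
# POST https://gateway.watsonplatform.net/discovery/api/v1/environments/
#      {environment_id}/collections/{collection_id}/expansions?version=2018-03-05
print(payload)
```

Note that each collection accepts a single expansion list, so all input/expanded term pairs need to be uploaded together as shown.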

  • Markus Graulich commented
    March 09, 2018 10:43

    The way I understand Query Expansion, it is a computationally expensive way to replace some entities in the NLQ with entity values that show up in the Discovery documents. Obviously it then finds the passage, but are you really sure this was due to the WKS model? And it is limited to 500 expansions (probably due to performance limitations).

    March 20, 2018 22:54

    Yes, quite sure that it's the WKS model that is making the difference, based on my testing.  No comment from me on how computationally expensive it is to make use of Query Expansion.

  • Alessandro Vignazia commented
    December 12, 2018 14:16

    I have seen the use of the enrichment filters across entities, concepts, etc. that can be applied to delimit a text search via Discovery, as detailed in the asset "....". Our client is asking us for a chatbot-based "search", and currently we are investigating this type of filtering mechanism, but in the form of (we hope) more elegant chatbot (Assistant) "cards" for user selection to delimit the search results.

    QUESTION: I note this item has "Planned" status; what exactly is planned, please? Also, I don't understand why the comments below indicate you cannot use a custom WKS model for entity extraction within Discovery together with the approach we are suggesting; please advise. I do realise our approach is rather labourious, and it would be great to have some kind of filtering on enrichments, and any other type of search disambiguation (similar to what Assistant is now doing for intents), built into the standard Discovery offerings.