
Enhancing Feedback Management by Leveraging AI

In today's fast-paced business environment, suggestions and feedback from workers in the field can be a critical source of valuable information. These inputs can uncover issues that may be invisible to standard inspections, and enable swift reactions, since problems can be identified and addressed immediately. However, the process of managing these suggestions and complaints can be overwhelming, particularly for larger companies. Disorganization can lead to inefficiencies, with multiple teams working on the same issues, and resolved issues being re-investigated.

To overcome these challenges, companies often appoint Status Managers whose job is to organize suggestions into Issues, redirect them to the relevant team, and track progress. However, manual reorganization of complaints can become unviable in larger organizations. 

To address the issue of disorganized inputs and reduce the workload of managers, we have developed a system that streamlines issue creation. It uses a Semantic Textual Similarity model to automatically suggest related complaints while an issue is being filed, reducing the number of suggestions that managers need to compare manually, and a topic classifier to predict the department to which the issue should be redirected. The result is a semi-automatic issue creation process that is highly efficient, freeing managers to focus on more strategic tasks.

Bi-Encoder Models

To compare complaints automatically, we employ a Bi-Encoder Semantic Textual Similarity (STS) model. Semantic Textual Similarity models compare two given sentences semantically, giving high scores to sentences with similar meanings and low scores to sentences with different meanings. Two approaches exist for this: Cross-Encoders and Bi-Encoders. A Cross-Encoder takes both sentences as input and outputs one score for the pair; if you wish to compare every sentence with every other sentence, you must run the model once per pair. A Bi-Encoder model, on the other hand, takes in one sentence and outputs one embedding. The model is trained to produce embeddings that can be compared using a predetermined function. For example, the model we use is trained to generate embeddings that can be compared using Cosine Similarity, where the Cosine Similarity of two embeddings correlates with the semantic similarity of the texts they belong to [1]. This allows us to compare millions of complaints with each other extremely quickly.
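As a rough illustration, here is a minimal sketch of the Bi-Encoder comparison using the sentence-transformers library; the model name and example complaints are placeholders rather than our production setup.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder multilingual STS model, not our production model.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

complaints = [
    "The emergency exit on the second floor is blocked by boxes.",
    "Boxes are piled up in front of the fire exit upstairs.",
    "The coffee machine in the break room is out of order.",
]

# Each complaint is encoded independently into a single embedding vector.
embeddings = model.encode(complaints, convert_to_tensor=True)

# Cosine similarity between all pairs; high scores indicate related complaints.
scores = util.cos_sim(embeddings, embeddings)
print(scores)  # complaints 0 and 1 should score much higher with each other than with 2
```

Because each complaint is encoded once and only the cheap cosine comparison is repeated, the cost grows linearly with the number of complaints rather than quadratically in model calls, which is what makes large-scale comparison practical.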

Multilingualism through Distillation

When training a Multilingual model, especially a Bi-Encoder model, one issue that arises is vector space misalignment. This occurs if the model is trained on completely separate STS datasets for each language: since no data points create interlingual connections, the embedding spaces for different languages aren't guaranteed to align with each other. Even though the positioning of embeddings within one language is meaningful, embeddings of equivalent texts in different languages may end up far apart. To avoid this, as described in [2], we expand a Monolingual model into a Multilingual one instead of training a Multilingual model directly. In this approach, a Monolingual model is first trained for the Bi-Encoder Semantic Textual Similarity task, and Knowledge Distillation is then used to extend it to other languages, with the monolingual model acting as a teacher for a multilingual student model. The student is trained to reproduce the teacher's embeddings for both the original sentences and their translations, which ensures that embeddings of the same text in different languages align, so the model still gives similar embeddings even when the texts being compared come from different languages.
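The sketch below shows the general shape of this teacher-student setup, assuming the sentence-transformers library; the model names and the single parallel sentence pair are placeholders. The student is pushed, via an MSE loss, to map both a sentence and its translation onto the teacher's embedding of that sentence.

```python
from sentence_transformers import SentenceTransformer, models, losses, InputExample
from torch.utils.data import DataLoader

# Teacher: a well-trained monolingual (here English) STS bi-encoder (placeholder name).
teacher = SentenceTransformer("all-MiniLM-L6-v2")

# Student: a multilingual transformer that will learn to mimic the teacher.
word_embedding = models.Transformer("xlm-roberta-base")
pooling = models.Pooling(word_embedding.get_word_embedding_dimension())
student = SentenceTransformer(modules=[word_embedding, pooling])

# Parallel data: (source sentence, translation). One placeholder pair for illustration.
parallel_pairs = [("The machine is leaking oil.", "Die Maschine verliert Öl.")]

train_examples = []
for src, tgt in parallel_pairs:
    target = teacher.encode(src)  # teacher embedding of the source sentence
    # The student should map BOTH the source and its translation onto this
    # embedding, so the two languages end up aligned in one vector space.
    train_examples.append(InputExample(texts=[src], label=target))
    train_examples.append(InputExample(texts=[tgt], label=target))

loader = DataLoader(train_examples, batch_size=8, shuffle=True)
train_loss = losses.MSELoss(model=student)  # MSE between student and teacher embeddings
student.fit(train_objectives=[(loader, train_loss)], epochs=1, warmup_steps=10)
```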

Cross-Lingual Topic Classification

Using the embeddings generated by the Semantic Textual Similarity model, we train a small neural network on top to discern the topic of the text. Because the embedding space is shared across every language the STS model supports, a classifier trained on data from one language also works for the others, enabling cross-lingual classification.
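A minimal sketch of this idea, with made-up department names and training texts, might look like the following: the STS encoder stays frozen, and only a small classification head is trained on its embeddings.

```python
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # frozen, placeholder name
departments = ["Safety", "Maintenance", "Facilities"]  # illustrative labels

texts = [
    "The emergency exit is blocked.",             # Safety
    "The conveyor belt makes a grinding noise.",  # Maintenance
    "The break room heater is broken.",           # Facilities
]
labels = torch.tensor([0, 1, 2])

# Small classification head on top of the frozen sentence embeddings.
classifier = nn.Sequential(
    nn.Linear(encoder.get_sentence_embedding_dimension(), 128),
    nn.ReLU(),
    nn.Linear(128, len(departments)),
)

features = encoder.encode(texts, convert_to_tensor=True).cpu()  # no gradient through the encoder
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):  # tiny toy training loop
    optimizer.zero_grad()
    loss = loss_fn(classifier(features), labels)
    loss.backward()
    optimizer.step()

# Because the encoder aligns languages, the same head can classify other languages too.
query = encoder.encode(["Die Heizung im Pausenraum ist kaputt."], convert_to_tensor=True).cpu()
print(departments[classifier(query).argmax(dim=1).item()])
```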

Conclusion

We believe that adding an automatic suggestion classifier and a semantic textual similarity model to the Suggestion Management System improves its efficiency and greatly simplifies the work involved in collecting and processing workers' input, making it more useful than ever.

References

  1. Reimers, Nils, and Iryna Gurevych. "Sentence-bert: Sentence embeddings using siamese bert-networks." arXiv preprint arXiv:1908.10084 (2019).
  2. Reimers, Nils, and Iryna Gurevych. "Making monolingual sentence embeddings multilingual using knowledge distillation." arXiv preprint arXiv:2004.09813 (2020).
