
I recently started my first position as a data scientist. On my first day on the job, I headed to the client site, armed with my repertoire of pre-processing modules, classification algorithms, regression methods, deep learning approaches, and evaluation techniques. I was ready for whatever this organization threw at me – I expected that I could solve their problems with some simpler models and a few data cleansing steps, much more straightforward than what I faced in my master’s program.

Boy, was I wrong.

The first problem handed to me was one they had been wrestling with for a few years now. They have a set of documents written by hundreds of different authors. These documents need to be tagged with specific metadata before they are stored, to make them searchable and accessible. Currently, this task is being carried out manually, with a different team of people reading each document and applying the tags. This process costs approximately 55 thousand labor hours across the sub-organization. Other teams are performing this same task on other document types in at least three different sub-organizations. That’s a lot of hours.

So, create an algorithm that can perform this tagging automatically. Great, I think to myself, as I load up spaCy and NLTK. Easy-peasy. As I start digging into the business logic and data behind the problem, I learn that one of the top priority metadata categories is the topic. There are 26 highly industry-specific topics, of which a document can have many. Okay, switch modes from entity extraction/tagging to text classification using NLP. Okay, I am still making progress. Now to find some training data and see the spread of these topics.
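To give a sense of the setup, here is a minimal sketch of the kind of preprocessing step I had in mind at this point (spaCy’s small English model on a toy sentence; this is illustrative, not my actual pipeline or data):

```python
# Minimal preprocessing sketch: lowercase, lemmatize, drop stop words/punctuation/numbers.
# The model name and token filters are illustrative, not my production settings.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model; a larger one could be swapped in


def preprocess(text):
    doc = nlp(text)
    return [
        tok.lemma_.lower()
        for tok in doc
        if not (tok.is_stop or tok.is_punct or tok.like_num or tok.is_space)
    ]


print(preprocess("The agency issued new guidance on cross-border shipments."))
# roughly: ['agency', 'issue', 'new', 'guidance', 'cross', 'border', 'shipment']
```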

As I am requesting access to data, I realize the reason this problem has not yet been solved. Of the hundreds of thousands of documents at my disposal, at least 50% of the tagging is incorrect. There is a small subset of manually curated, correctly tagged documents. And by small subset, I mean 87 documents.

There goes any chance of supervised learning. I struggle to wrap my mind around the fact that tons of people are spending tons of hours to manually tag documents with incorrect tags. At this point, I also realize that unsupervised modeling is not an option because these topics are not necessarily intuitive or generic. They are very industry-specific, and there are more apparent features of the text to cluster on, such as country or region.

Determined not to become stumped by the first real-world problem thrown my way, I turned to Google. I knew this was a topic modeling problem – I needed to sort the documents into 26 different topics based on their textual content. One of the most common topic modeling algorithms is Latent Dirichlet Allocation (LDA), which maps documents to topics, each topic represented by a to-be-determined set of words. For a great intro to LDA and topic modeling, see the LDA link in the references below. The only LDA applications I had ever worked with, however, were completely unsupervised. For the reasons stated above, that was not going to work for my specific use case.
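For anyone who hasn’t seen it in code, the fully unsupervised version looks roughly like this (scikit-learn on a toy corpus; the real problem would use 26 components and hundreds of thousands of documents):

```python
# Plain, unsupervised LDA: the model picks its own topics and its own top words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "tariff schedules for imported machinery and customs duties",
    "renewable energy subsidies and grid infrastructure spending",
    "customs enforcement actions at the northern border crossing",
    "utility regulators review grid reliability and energy pricing",
]  # toy stand-in for the real document collection

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # 26 for the real problem
doc_topic = lda.fit_transform(X)  # rows are documents, columns are topic proportions

# Nothing here forces the discovered topics to line up with the business-defined ones.
terms = vectorizer.get_feature_names_out()
for k, component in enumerate(lda.components_):
    top_words = [terms[i] for i in component.argsort()[-5:][::-1]]
    print(k, top_words)
```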

A coworker suggested seeded LDA as a semi-supervised approach to this problem, and thus began my search. I came across a blog post written by the creators of the GuidedLDA Python library that explained how they took LDA and seeded the topics with key terms to encourage the model to converge around their specific topics (rather than letting the model choose the words for each topic). This approach can be useful when you have very precise topics, as was the case with my problem.
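The seeding pattern from the GuidedLDA documentation looks roughly like this; the corpus and seed lists below are toy stand-ins, not my industry terms:

```python
# Seeded (guided) LDA: nudge chosen words toward chosen topic ids before fitting.
from sklearn.feature_extraction.text import CountVectorizer
import guidedlda

docs = [
    "new tariff duty rates announced for customs declarations",
    "renewable subsidy program expands grid storage capacity",
    "customs officers audit tariff classifications and duty payments",
    "solar subsidy and grid interconnection rules for renewable producers",
]  # toy corpus

vec = CountVectorizer()
X = vec.fit_transform(docs).toarray()  # GuidedLDA expects an integer document-term array
word2id = vec.vocabulary_

seed_topic_list = [
    ["tariff", "duty", "customs"],     # seeds for topic 0
    ["subsidy", "grid", "renewable"],  # seeds for topic 1
]
seed_topics = {
    word2id[w]: topic_id
    for topic_id, words in enumerate(seed_topic_list)
    for w in words
    if w in word2id
}

model = guidedlda.GuidedLDA(n_topics=2, n_iter=100, random_state=7, refresh=20)
# seed_confidence controls how strongly seeded words are biased toward their topic
model.fit(X, seed_topics=seed_topics, seed_confidence=0.15)
```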

I excitedly loaded up the library. Applying this method to my data, I reached a whopping 16% accuracy against my test set. Needless to say, I was a bit discouraged. Back to Google. I found another, similar approach to this semi-supervised topic modeling problem with CorEx (correlation explanation). This library has the option of supplying anchor words to the algorithm, encouraging the model to converge around my enumerated topics, similar to the GuidedLDA model. I won’t get into the dirty details outlining the differences between the two models (see the references below), because, in the beginning, it didn’t matter for me. The initial accuracy on the testing data with CorEx was 14%.
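Anchoring in CorEx follows the pattern in the corextopic examples; again, the corpus and anchors here are illustrative:

```python
# Anchored CorEx: anchor words pull topics toward the business-defined categories.
import scipy.sparse as ss
from sklearn.feature_extraction.text import CountVectorizer
from corextopic import corextopic as ct

docs = [
    "new tariff duty rates announced for customs declarations",
    "renewable subsidy program expands grid storage capacity",
    "customs officers audit tariff classifications and duty payments",
    "solar subsidy and grid interconnection rules for renewable producers",
]  # toy corpus

vec = CountVectorizer(binary=True)  # CorEx is typically run on binary word presence
X = ss.csr_matrix(vec.fit_transform(docs))
words = list(vec.get_feature_names_out())

anchors = [
    ["tariff", "duty", "customs"],     # anchors for topic 0
    ["subsidy", "grid", "renewable"],  # anchors for topic 1
]

topic_model = ct.Corex(n_hidden=2, seed=1)  # n_hidden would be 26 for the real problem
# anchor_strength sets how strongly the anchors influence their topics
topic_model.fit(X, words=words, anchors=anchors, anchor_strength=3)

for i, topic in enumerate(topic_model.get_topics()):
    print(i, [w for w, *rest in topic])
```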

After a few days of continuous searching and exploration, I realized that for the realm of semi-supervised topic modeling, this was pretty much it. My two options were GuidedLDA and CorEx. I was able to up my accuracy in a few, perhaps obvious ways. Stratified sampling was huge for me. Taking 100 records from each topic to create an evenly distributed training data set increased my accuracy by at least 10%. Keep in mind, this stratified sample is taken from the incorrectly tagged data repository, but it was the best I could do. Additionally, I eventually gained access to the definitions of each topic, allowing me to use term-frequency matrices to extract key terms per topic that I could then feed into these topic modeling algorithms. These changes bumped my accuracy another 10-15%.
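A rough sketch of those two fixes, using toy data and an assumed schema (one row per document–tag pair, plus a dictionary of topic definitions):

```python
# Stratified sampling from the noisy repository, plus seed terms mined from topic definitions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Assumed schema: one row per (document, tag) pair from the noisily tagged repository.
df = pd.DataFrame({
    "text": ["doc about duties", "doc about power grids", "doc about customs audits"],
    "topic": ["tariffs", "energy", "tariffs"],
})

# Take up to 100 records per topic so no single topic dominates training.
balanced = (
    df.groupby("topic", group_keys=False)
      .apply(lambda g: g.sample(n=min(len(g), 100), random_state=0))
)

# Assumed: the written definition of each topic, once access was finally granted.
topic_definitions = {
    "tariffs": "duties and taxes applied to imported goods at the border",
    "energy": "generation, transmission, and regulation of electrical power",
}

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(topic_definitions.values())
terms = vec.get_feature_names_out()

# Highest-weighted terms in each definition become candidate seed/anchor words.
seed_terms = {
    topic: [terms[i] for i in row.argsort()[-10:][::-1]]
    for topic, row in zip(topic_definitions.keys(), tfidf.toarray())
}
print(seed_terms)
```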

Other hyperparameter tuning tasks, such as adjusting my processed text for industry stop words, selecting the token length, limiting minimum and maximum document frequencies, and finding the ideal threshold for seed word confidence, landed me at 53% accuracy on the test data set. Congratulations, I thought to myself, I am now slightly better than the paid human taggers.
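Most of that tuning happens at the vectorizer level, plus the seed-confidence threshold; here is a sketch with illustrative values (not the settings behind the 53% figure):

```python
# Vectorizer-level knobs: custom stop words, token length, and document-frequency cutoffs.
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS

industry_stop_words = {"department", "annex", "revision"}  # assumed examples, not my real list
stop_words = list(ENGLISH_STOP_WORDS | industry_stop_words)

vectorizer = CountVectorizer(
    stop_words=stop_words,
    token_pattern=r"(?u)\b[a-z]{3,}\b",  # keep alphabetic tokens of length >= 3
    min_df=5,                            # drop terms appearing in fewer than 5 documents
    max_df=0.5,                          # drop terms appearing in more than half the documents
)

# For GuidedLDA, the seed-word confidence threshold gets swept the same way:
for seed_confidence in (0.1, 0.25, 0.5, 0.75):
    pass  # refit the model with this value and score it against the 87 curated documents
```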

So, what’s next? How else can I increase the accuracy of this algorithm with the data I have available? There are a few other hyperparameters to tune (alpha, beta, etc.), and I can always gather a larger quasi-stratified sample to throw at the model (keeping in mind that I can’t be sure exactly how evenly distributed it is). Some other ideas that have cropped up are leveraging a concept ontology (or word embeddings) to enhance the depth of my seed words, synthetically duplicating the curated documents until they are numerous enough to serve as a training set for supervised learning, or applying transfer learning from a large, external corpus and hoping that its topics align with the internal business topics. And, of course, there’s the world of deep learning.
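As an example of the embedding idea, the seed lists could be padded out with nearest neighbours from a pre-trained model (the model choice and seed words below are purely illustrative):

```python
# Expand each topic's seed list with nearest neighbours from a word-embedding model.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")  # small pre-trained embedding, for illustration only

seed_terms = {"tariffs": ["tariff", "duty", "customs"]}  # toy seed list

expanded = {}
for topic, seeds in seed_terms.items():
    in_vocab = [s for s in seeds if s in wv]
    neighbours = [w for w, _ in wv.most_similar(positive=in_vocab, topn=5)]
    expanded[topic] = seeds + neighbours

print(expanded["tariffs"])
```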

There is the obvious choice. I could always ask for real, usable data. But, as I’m starting to learn, that isn’t always an option. So, while I plan to show up to the weekly meeting and lobby for better data for a sixth time, I will continue to work with the data that I have. The group is very excited that my model can outperform the hundreds of people they pay to do this job, but having a model that is only correct half of the time is the same as having a model that is wrong almost half of the time. Well, I guess it is time for me to get back to the drawing board.

 

References:

LDA:

https://towardsdatascience.com/light-on-math-machine-learning-intuitive-guide-to-latent-dirichlet-allocation-437c81220158

GuidedLDA:

https://www.freecodecamp.org/news/how-we-changed-unsupervised-lda-to-semi-supervised-guidedlda-e36a95f3a164/

https://medium.com/analytics-vidhya/how-i-tackled-a-real-world-problem-with-guidedlda-55ee803a6f0d

CorEx Topic Modeling:

https://github.com/gregversteeg/corex_topic

https://github.com/gregversteeg/corex_topic/blob/master/corextopic/example/corex_topic_example.ipynb


5 Comments

  1. Srujan

    Hey Samantha, good to see another data scientist dodging the bullet and surviving! I am struggling with this exact same problem at present, and my team has tried GuidedLDA and LabelledLDA, but with little luck (read: accuracy). Did you find anything better?
    If you haven’t tried it yet, you could look at LabelledLDA.

    • Samantha Hamilton

      Hi Srujan,

      Glad to hear I’m not the only one struggling with this issue. I have not been able to revisit the problem since the start of the coronavirus pandemic, but I had experimented with LabeledLDA previously. I found that LabeledLDA performed significantly worse than GuidedLDA, regardless of what hyperparameter tuning or data set stratification I performed. After a few tries, I decided to stick with GuidedLDA and CorEx as the algorithms to spend more time exploring.

      After further experimentation with hyperparameters and data set size/stratification, I found that CorEx almost always outperformed GuidedLDA. As I continued to iterate with both of them, gaining more data and more business knowledge of the problem, CorEx consistently came out ahead. If your group has not yet tried CorEx, that would be my main suggestion.

      Other things I plan to try with my data set (other than continuing to lobby for more, better data) are hierarchical agglomerative clustering, multiple individual binary classifiers, and a series of hierarchical classifiers (we have learned that certain topics are linked to certain countries, which we have been able to tag with >90% accuracy in these same documents). The most promising direction seems to be using business knowledge to narrow down the number of possible topics (from 26 to 10 or 11) and then attempting the classification.

      Hope this helps!

  2. Srujan

    Thank you so much, Samantha, for your response. We have started trying out the hierarchical approach and will drop a note if it goes well so that others can benefit too. Will definitely try CorEx now.

    We are trying this now in GuidedLDA: fit the algorithm on only one topic (the manually tagged documents), forcing the LDA to identify it as a single topic, then extract a seed from the top-weighted words and supply that as the seed for this topic/intent in GuidedLDA.
    By the way, we are dealing with intent identification for questions asked by customers in QnA forums/chat groups.

    Cheers!

  3. Ruben Partouche

    Hi Samantha!

    First, thanks a lot for your article; it has been really helpful to me (this is my very first professional experience in data science).

    I’m currently working on a very similar issue: I have a set of large documents containing a few specific topics (environmental issues like biodiversity destruction, climate change, etc.), the only difference being that my data is not labeled at all (I’m currently working on labeling a sufficient amount).

    I’m trying to apply the lessons you learned to my data, and I found that balancing my training set is not as easy as it sounds, since there can be multiple labels for each document. I realized that balancing a dataset for a multilabel problem is in itself a machine learning task. Did you choose to select 100 records of each topic regardless of the other labels there could be on each record, or did you use a more sophisticated approach?

    Thanks!

    • Samantha Hamilton

      Hello,

      I am happy to hear that my article is assisting you in your problem! The ability to correctly tag documents with industry-specific topics is needed in almost every sector, and it is hard in just about every sector as well.

      Great question! Each document was tagged (albeit potentially incorrectly) with multiple tags, ranging from 2 to 10 tags per document. When I was attempting my stratified sampling (100 records per topic), I selected documents that had only 2 tags. My assumption here was that if a document had only two topics, its text was going to be more specific to those topics.

      For example, I can talk at a high level about science and politics and sports, but if I’m only talking about science, then I’m more likely to use topic-specific words more frequently. This would help bump the relative frequency of topic-specific terms, helping my model learn more clearly.

      Using that logic, I selected 100 records per topic, where each document had two topics. I also ensured that I was doing sampling without replacement, so there was no possibility that the model was learning the same subset of frequent terms for different topics.
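      A rough sketch of what that selection looked like, with toy data and an assumed schema (one row per document, a list-valued “tags” column):

      ```python
      # Toy illustration of sampling two-tag documents per topic, without replacement.
      import pandas as pd

      df = pd.DataFrame({
          "text": ["doc a", "doc b", "doc c", "doc d"],
          "tags": [["tariffs", "energy"], ["tariffs", "trade"],
                   ["energy", "trade"], ["tariffs", "energy", "trade"]],
      })
      all_topics = ["tariffs", "energy", "trade"]

      two_tag = df[df["tags"].str.len() == 2]  # keep only documents with exactly two tags

      sampled_ids, samples = set(), []
      for topic in all_topics:
          pool = two_tag[two_tag["tags"].apply(lambda t: topic in t)]
          pool = pool[~pool.index.isin(sampled_ids)]             # without replacement across topics
          pick = pool.sample(n=min(len(pool), 100), random_state=0)
          sampled_ids.update(pick.index)
          samples.append(pick.assign(topic=topic))

      training = pd.concat(samples, ignore_index=True)
      print(training)
      ```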

      This method may be challenging for you since you are self-tagging a lot of your own data. Another approach we employed was to run both the top terms per topic and the false-positive results for each topic by subject-matter experts (SMEs). This enabled them to point out why the model might be missing specific topics, and we were able to manually adjust the model’s learning.

      Hope this helps, and good luck with your problem!
