Extracting Metrics from Text Using TextDescriptives#


DaCy allows you to use other packages in the spaCy universe as you normally would - just powered by the DaCy models.

The following tutorial shows you how to use DaCy and TextDescriptives to extract a variety of metrics from text. For more information on the metrics that can be extracted, see the TextDescriptives documentation.
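If you don’t already have the two libraries installed, both should be available from PyPI (spaCy is pulled in as a dependency):

pip install dacy textdescriptives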

Data#

In this tutorial we’ll use TextDescriptives and DaCy to get a quick overview of the SMS Spam Collection dataset, which contains 5,572 SMS messages categorized as ham (genuine) or spam.

Note

The astute among you will have noticed that this dataset is not Danish. This tutorial simply wants to show how to use DaCy and TextDescriptives together, and hopefully inspire you to apply the tools to your own (Danish) data.

To start, let’s load a dataset and get a bit familiar with it.

from textdescriptives.utils import load_sms_data

df = load_sms_data()
df.head()
label message
0 ham Go until jurong point, crazy.. Available only ...
1 ham Ok lar... Joking wif u oni...
2 spam Free entry in 2 a wkly comp to win FA Cup fina...
3 ham U dun say so early hor... U c already then say...
4 ham Nah I don't think he goes to usf, he lives aro...
df["label"].value_counts()
label
ham     4825
spam     747
Name: count, dtype: int64
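The classes are quite imbalanced. Normalizing the counts makes this explicit (approximate proportions in the comment):

# roughly 87% ham and 13% spam
df["label"].value_counts(normalize=True)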

Adding TextDescriptives components to DaCy#

Adding TextDescriptives components to a DaCy pipeline follows exactly the same procedure as for any spaCy model. Let’s add the readability and dependency_distance components: readability calculates readability metrics, and dependency_distance calculates the average dependency distance between words in a sentence, which can be seen as a measure of sentence complexity.

Because we are using a DaCy model, the dependency_distance component will use the dependency parser from DaCy for its calculations.

import dacy

nlp = dacy.load("small")  # load the latest version of the small model

nlp.add_pipe("textdescriptives/readability")
nlp.add_pipe("textdescriptives/dependency_distance")
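To confirm that the components were added after DaCy’s own components (including the dependency parser), you can inspect the pipeline; the exact component list below is indicative, not verbatim:

# list the component names in pipeline order
print(nlp.pipe_names)
# [..., 'parser', ..., 'textdescriptives/readability', 'textdescriptives/dependency_distance']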

From now on, whenever we pass a document through the pipeline (nlp), TextDescriptives will add readability and dependency distance metrics to the document.
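For a single document, the metrics end up as extension attributes on the Doc. A small example (the ._.-attribute names follow TextDescriptives’ conventions and may vary between versions):

# process a single Danish sentence and read off the metrics
doc = nlp("Dette er en kort og letlæselig sætning.")
print(doc._.readability)          # dict of readability metrics
print(doc._.dependency_distance)  # dict of dependency-distance metrics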

Let’s run the messages through the pipeline.

# to speed things up (especially on CPU), let's subsample the data;
# resetting the index keeps the rows aligned with the metrics dataframe
# we extract below, so the later join matches messages correctly
df = df.sample(500, random_state=42).reset_index(drop=True)

docs = nlp.pipe(df["message"])
import textdescriptives as td

# extract the metrics as a dataframe
metrics = td.extract_df(docs, include_text=False)
# join the metrics to the original dataframe
df = df.join(metrics, how="left")
df.head()
(Output: the first 5 rows of the joined dataframe, 28 columns in total: label, message, and 26 metric columns such as flesch_reading_ease, lix, sentence_length_std, syllables_per_token_mean, n_tokens and n_sentences.)

That’s it! Let’s use the extracted metrics to get to know the data a bit better.

Exploratory Data Analysis#

With the metrics extracted, let’s do some quick exploratory data analysis to get a sense of the data. Let’s start off by taking a look at the distribution of one of the readability metrics, lix.

import seaborn as sns

sns.boxplot(x="label", y="lix", data=df)
(Output: a boxplot of lix scores for the ham and spam classes.)

Let’s run a quick check to see if any of our metrics correlate strongly with the label.

# encode the label as a boolean
df["is_ham"] = df["label"] == "ham"
# compute the correlation between all metrics and the label
metrics_correlations = metrics.corrwith(df["is_ham"]).sort_values(
    key=abs, ascending=False
)
metrics_correlations[:10]
sentence_length_std           -0.303042
coleman_liau_index             0.212275
n_unique_tokens               -0.174772
token_length_mean              0.172780
syllables_per_token_median     0.169951
flesch_reading_ease           -0.166157
syllables_per_token_mean       0.165625
n_tokens                      -0.163977
dependency_distance_std       -0.163154
automated_readability_index    0.153635
dtype: float64

Those are some fairly strong correlations! Notably, the variability of the dependency distance (dependency_distance_std) is negatively correlated with ham. This makes sense: dependency distance is a measure of sentence complexity, and genuine SMS messages tend to be shorter and simpler than spam.
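To put numbers on that, we can compare per-class summary statistics with a quick pandas groupby (output not shown):

# summary statistics of mean dependency distance for ham vs. spam
df.groupby("label")["dependency_distance_mean"].describe()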

Let’s try to plot it:

sns.kdeplot(df, x="dependency_distance_mean", hue="label", fill=True)
(Output: overlapping density plots of dependency_distance_mean for the ham and spam classes.)

We can do the same for the lix score, where we see that there isn’t a big difference between the two classes:

sns.kdeplot(df, x="lix", hue="label", fill=True)
(Output: overlapping density plots of lix for the ham and spam classes.)

Cool! We’ve now done a quick analysis of the SMS dataset and found differences in the distributions of some readability and dependency-distance metrics between genuine SMS messages and spam.

Next steps could be to continue the exploratory data analysis or to build a simple classifier using the extracted metrics.
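As a sketch of that last step, a simple classifier on the extracted metrics could look roughly like this with scikit-learn (not a dependency of this tutorial; the NaN handling and train/test split are assumptions you may want to adjust):

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# use the extracted metrics as features and the ham/spam label as target;
# very short messages can yield NaN metrics, so fill those with 0
X = metrics.fillna(0)
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))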