Releases: chartbeat-labs/textacy
v0.6.3
New:
- Added a proper contributing guide and code of conduct, as well as separate
  GitHub issue templates for different user situations. This should help folks
  contribute to the project more effectively, and make maintaining it a bit
  easier, too. [Issue #212]
- Gave the documentation a new look, using a template popularized by `requests`.
  Added documentation on dealing with multi-lingual datasets. [Issue #233]
- Made some minor adjustments to package dependencies, the way they're specified,
  and the Travis CI setup, making for a faster and better development experience.
- Confirmed and enabled compatibility with v2.1+ of `spacy`. 💫
Changed:
- Improved the `Wikipedia` dataset class in a variety of ways: it can now read
  Wikinews db dumps; access records in namespaces other than the usual "0"
  (such as category pages in namespace "14"); parse and extract category pages
  in several languages, including in the case of bad wiki markup; and filter out
  section headings from the accompanying text via an `include_headings` kwarg.
  (See the sketch after this list.) [PR #219, #220, #223, #224, #231]
- Removed the `transliterate_unicode()` preprocessing function that transliterated
  non-ascii text into a reasonable ascii approximation, for technical and
  philosophical reasons. Also removed its GPL-licensed `unidecode` dependency,
  for legal-ish reasons. [Issue #203]
- Added convention-abiding `exclude` argument to the function that writes
  `spacy` docs to disk, to limit which pipeline annotations are serialized.
  Replaced the existing but non-standard `include_tensor` arg.
- Deprecated the `n_threads` argument in `Corpus.add_texts()`, which had not
  been working in `spacy.pipe` for some time and, as of v2.1, is defunct.
- Made many tests model- and python-version agnostic and thus less likely to break
  when `spacy` releases new and improved models.
- Auto-formatted the entire code base using `black`; the results aren't always
  more readable, but they are pleasingly consistent.
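Here's a minimal sketch of the expanded `Wikipedia` dataset usage mentioned above. The placement of the `namespace` and `include_headings` options (shown on the class init and on `.records()`, respectively) is an assumption, so double-check the API docs for your installed version.

```python
import textacy.datasets

# assumed: namespace is set at init; include_headings is a kwarg on .records()
wp = textacy.datasets.Wikipedia(lang="en", version="latest", namespace="0")
wp.download()  # fetches the db dump, if not already on disk
for record in wp.records(limit=3, include_headings=False):
    ...  # each record pairs the page text with its metadata
```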
Fixed:
- Fixed bad behavior of `key_terms_from_semantic_network()`, where an error
  would be raised if no suitable key terms could be found; now, an empty list
  is returned instead. [Issue #211]
- Fixed variable name typo so `GroupVectorizer.fit()` actually works. [Issue #215]
- Fixed a minor typo in the quick-start docs. [PR #217]
- Check for and filter out any named entities that are entirely whitespace,
  seemingly caused by an issue in `spacy`.
- Fixed an undefined variable error when merging spans. [Issue #225]
- Fixed a unicode/bytes issue in experimental function for deserializing
  `spacy` docs in "binary" format. [Issue #228, PR #229]
Contributors:
Many thanks to @abevieiramota, @ckot, @Jude188, and @digest0r for their help!
v0.6.2
Changes:
- Add a `spacier.util` module, and add / reorganize relevant functionality
  - move (most) `spacy_util` functions here, and add a deprecation warning to
    the `spacy_util` module
  - rename `normalized_str()` => `get_normalized_text()`, for consistency and clarity
  - add a function to split long texts up into chunks but combine them into
    a single `Doc`. This is a workaround for a current limitation of spaCy's
    neural models, whose RAM usage scales with the length of input text.
- Add experimental support for reading and writing spaCy docs in binary format,
  where multiple docs are contained in a single file. This functionality was
  supported by spaCy v1, but is not in spaCy v2; I've implemented a workaround
  that should work well in most situations, but YMMV.
- Package documentation is now "officially" hosted on GitHub pages. The docs
  are automatically built on and deployed from Travis via `doctr`, so they
  stay up-to-date with the master branch on GitHub. Maybe someday I'll get
  ReadTheDocs to successfully build `textacy` once again...
- Minor improvements/updates to documentation
Bugfixes:
- Add missing return statement in deprecated `text_stats.flesch_readability_ease()`
  function (Issue #191)
- Catch an empty graph error in bestcoverage-style keyterm ranking (Issue #196)
- Fix mishandling when specifying a single named entity type to in/exclude in
  `extract.named_entities` (Issue #202)
- Make `networkx` usage in keyterms module compatible with v1.11+ (Issue #199)
v0.6.1
Changes:
- Add a new `spacier` sub-package for spaCy-oriented functionality (#168, #187)
  - Thus far, this includes a `components` module with two custom spaCy
    pipeline components: one to compute text stats on parsed documents, and
    another to merge named entities into single tokens in an efficient manner.
    More to come!
  - Similar functionality in the top-level `spacy_pipelines` module has been
    deprecated; it will be removed in v0.7.0.
- Update the readme, usage, and API reference docs to be clearer and (I hope)
  more useful. (#186)
- Removing punctuation from a text via the `preprocessing` module now replaces
  punctuation marks with a single space rather than an empty string. This gives
  better behavior in many situations; for example, "won't" => "won t" rather than
  "wont", the latter of which is a valid word with a different meaning.
  (See the sketch after this list.)
- Categories are now correctly extracted from non-English language Wikipedia
  datasets, starting with French and German and extendable to others. (#175)
- Log progress when adding documents to a corpus. At the debug level, every
  doc's addition is logged; at the info level, only one message per batch
  of documents is logged. (#183)
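To illustrate the punctuation change above, here's a minimal sketch; it assumes the relevant function is importable as `textacy.preprocess.remove_punct()`, so adjust the import to match your installed version.

```python
from textacy import preprocess

# punctuation marks are now replaced with a space rather than stripped outright,
# so contractions no longer collapse into different words
print(preprocess.remove_punct("won't"))  # -> "won t" (previously "wont")
```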
Bugfixes:
- Fix two breaking typos in `extract.direct_quotations()`. (issue #177)
- Prevent crashes when adding non-parsed documents to a `Corpus`. (#180)
- Fix bugs in `keyterms.most_discriminating_terms()` that used `vsm`
  functionality as it was before the changes in v0.6.0. (#189)
- Fix a breaking typo in `vsm.matrix_utils.apply_idf_weighting()`, and rename
  the problematic kwarg for consistency with related functions. (#190)
Contributors:
Big thanks to @sammous, @dixiekong (nice name!), and @SandyRogers for the pull
requests, and many more for pointing out various bugs and the rougher edges /
unsupported use cases of this package.
Improved I/O, VSM, docs, and more
Changes:
- Rename, refactor, and extend I/O functionality (PR #151)
  - Related read/write functions were moved from `read.py` and `write.py` into
    format-specific modules, and similar functions were consolidated into one
    with the addition of an arg. For example, `write.write_json()` and
    `write.write_json_lines()` => `json.write_json(lines=True|False)`.
    (See the sketch after this list.)
  - Useful functionality was added to a few readers/writers. For example,
    `write_json()` now automatically handles python dates/datetimes, writing
    them to disk as ISO-formatted strings rather than raising a TypeError
    ("datetime is not JSON serializable", ugh). CSVs can now be written to /
    read from disk when each row is a dict rather than a list. Reading/writing
    HTTP streams now allows for basic authentication.
  - Several things were renamed to improve clarity and consistency from a user's
    perspective, most notably the subpackage name: `fileio` => `io`. Others:
    `read_file()` and `write_file()` => `read_text()` and `write_text()`;
    `split_record_fields()` => `split_records()`, although I kept an alias
    to the old function for folks; `auto_make_dirs` boolean kwarg => `make_dirs`.
  - `io.open_sesame()` now handles zip files (provided they contain only 1 file)
    as it already does for gzip, bz2, and lzma files. On a related note, Python 2
    users can now open lzma (`.xz`) files if they've installed `backports.lzma`.
- Improve, refactor, and extend vector space model functionality (PRs #156 and #167)
  - BM25 term weighting and document-length normalization were implemented, and
    users can now flexibly add and customize individual components of an
    overall weighting scheme (local scaling + global scaling + doc-wise normalization).
    For API sanity, several additions and changes to the `Vectorizer` init
    params were required --- sorry bout it!
  - Given all the new weighting possibilities, a `Vectorizer.weighting` attribute
    was added for curious users, to give a mathematical representation of how
    values in a doc-term matrix are being calculated. Here's a simple and a
    not-so-simple case:

    ```python
    >>> Vectorizer(apply_idf=True, idf_type='smooth').weighting
    'tf * log((n_docs + 1) / (df + 1)) + 1'
    >>> Vectorizer(tf_type='bm25', apply_idf=True, idf_type='bm25', apply_dl=True).weighting
    '(tf * (k + 1)) / (tf + k * (1 - b + b * (length / avg(lengths))) * log((n_docs - df + 0.5) / (df + 0.5))'
    ```

  - Terms are now sorted alphabetically after fitting, so you'll have a consistent
    and interpretable ordering in your vocabulary and doc-term-matrix.
  - A `GroupVectorizer` class was added, as a child of `Vectorizer` and
    an extension of typical document-term matrix vectorization, in which each
    row vector corresponds to the weighted terms co-occurring in a single document.
    This allows for customized grouping, such as by a shared author or publication year,
    that may span multiple documents, without forcing users to merge / concatenate
    those documents themselves.
  - Lastly, the `vsm.py` module was refactored into a `vsm` subpackage with
    two modules. Imports should stay the same, but the code structure is now
    more amenable to future additions.
- Miscellaneous additions and improvements
  - Flesch Reading Ease in the `textstats` module is now multi-lingual! Language-
    specific formulations for German, Spanish, French, Italian, Dutch, and Russian
    were added, in addition to (the default) English. (PR #158, prompted by Issue #155)
  - Runtime performance, as well as docs and error messages, of functions for
    generating semantic networks from lists of terms or sentences were improved. (PR #163)
  - Labels on named entities from which determiners have been dropped are now
    preserved. There's still a minor gotcha, but it's explained in the docs.
  - The size of `textacy`'s data cache can now be set via an environment
    variable, `TEXTACY_MAX_CACHE_SIZE`, in case the default 2GB cache doesn't
    meet your needs.
  - Docstrings were improved in many ways, large and small, throughout the code.
    May they guide you even more effectively than before!
  - The package version is now set from a single source. This isn't for you so
    much as me, but it does prevent confusing version mismatches b/w code, pypi,
    and docs.
  - All tests have been converted from `unittest` to `pytest` style. They
    run faster, they're more informative in failure, and they're easier to extend.
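As a quick illustration of the consolidated JSON I/O mentioned above, here's a minimal sketch. The argument order (data first, then filepath) is assumed, so check the `io` docs if it doesn't match your version.

```python
import datetime
from textacy.io import json as io_json

records = [
    {"title": "doc1", "published": datetime.date(2018, 2, 1), "text": "The quick brown fox."},
    {"title": "doc2", "published": datetime.date(2018, 2, 2), "text": "Jumped over the lazy dog."},
]
# lines=True writes one JSON object per line (the old write_json_lines() behavior);
# dates/datetimes are serialized as ISO-formatted strings instead of raising a TypeError
io_json.write_json(records, "records.jsonl", lines=True)
```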
Bugfixes:
- Fixed an issue where existing metadata associated with a spacy Doc was being
  overwritten with an empty dict when using it to initialize a textacy Doc.
  Users can still overwrite existing metadata, but only if they pass in new data.
- Added a missing import to the README's usage example. (#149)
- The intersphinx mapping to `numpy` got fixed (and items for `scipy` and
  `matplotlib` were added, too). Taking advantage of that, a bunch of broken
  object links scattered throughout the docs got fixed.
- Fixed broken formatting of old entries in the changelog, for your reading pleasure.
spacy v2.0 compatibility and lots of cleanup
Changes:
- Bumped version requirement for spaCy from < 2.0 to >= 2.0 --- textacy no longer
  works with spaCy 1.x! It's worth the upgrade, though. v2.0's new features and
  API enabled (or required) a few changes on textacy's end:
  - `textacy.load_spacy()` takes the same inputs as the new `spacy.load()`,
    i.e. a package `name` string and an optional list of pipes to `disable`
  - textacy's `Doc` metadata and language string are now stored in `user_data`
    directly on the spaCy `Doc` object; although the API from a user's perspective
    is unchanged, this made the next change possible
  - `Doc` and `Corpus` classes are now de/serialized via pickle into a single
    file --- no more side-car JSON files for metadata! Accordingly, the `.save()`
    and `.load()` methods on both classes have a simpler API: they take
    a single string specifying the file on disk where data is stored.
    (See the sketch after this list.)
- Cleaned up docs, imports, and tests throughout the entire code base.
  - docstrings and https://textacy.readthedocs.io 's API reference are easier to
    read, with better cross-referencing and far fewer broken web links
  - namespaces are less cluttered, and textacy's source code is easier to follow
  - `import textacy` takes less than half the time from before
  - the full test suite also runs about twice as fast, and most tests are now
    more robust to changes in the performance of spaCy's models
  - consistent adherence to conventions eases users' cognitive load :)
- The module responsible for caching loaded data in memory was cleaned up and
  improved, as well as renamed: from `data.py` to `cache.py`, which is more
  descriptive of its purpose. Otherwise, you shouldn't notice much of a difference
  besides things working correctly.
  - All loaded data (e.g. spacy language pipelines) is now cached together in a
    single LRU cache whose max size is set to 2GB, and the size of each element
    in the cache is now accurately computed. (tl;dr: `sys.getsizeof` does not
    work on non-built-in objects like, say, a `spacy.tokens.Doc`.)
  - Loading and downloading of the DepecheMood resource is now less hacky and
    weird, and much closer to how users already deal with textacy's various
    `Dataset`s. In fact, it can be downloaded in exactly the same way as the
    datasets via textacy's new CLI: `$ python -m textacy download depechemood`.
    P.S. A brief guide for using the CLI got added to the README.
- Several function/method arguments marked for deprecation have been removed.
  If you've been ignoring the warnings that print out when you use `lemmatize=True`
  instead of `normalize='lemma'` (etc.), now is the time to update your calls!
  - Of particular note: The `readability_stats()` function has been removed;
    use `TextStats(doc).readability_stats` instead.
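Here's a minimal sketch of the updated APIs called out above. The file path and example text are placeholders, and it assumes a spaCy English model is installed and that `.load()` can be called at the class level.

```python
import textacy

doc = textacy.Doc("The quick brown fox jumped over the lazy dog.", lang="en")

# readability_stats() is gone; use the TextStats class instead
readability = textacy.TextStats(doc).readability_stats

# Doc (and Corpus) now pickle to a single file -- no more side-car JSON
doc.save("doc.pkl")
same_doc = textacy.Doc.load("doc.pkl")
```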
Bugfixes:
- In certain situations, the text of a spaCy span was being returned without
  whitespace between tokens; that has been avoided in textacy, and the source bug
  in spaCy got fixed (by yours truly! explosion/spaCy#1621).
- When adding already-parsed `Doc`s to a `Corpus`, including `metadata` now
  correctly overwrites any existing metadata on those docs.
- Fixed a couple related issues involving the assignment of a 2-letter language
  string to the `.lang` attribute of `Doc` and `Corpus` objects.
- textacy's CLI wasn't correctly handling certain dataset kwargs in all cases;
  now, all kwargs get to their intended destinations.
v0.4.2
Changes:
- Added a CLI for downloading `textacy`-related data, inspired by the `spaCy`
  equivalent. It's temporarily undocumented, but to see available commands and
  options, just pass the usual flag: `$ python -m textacy --help`. Expect more
  functionality (and docs!) to be added soonish. (#144)
  - Note: The existing `Dataset.download()` methods work as before, and in fact,
    they are being called under the hood from the command line.
- Made usage of `networkx` v2.0-compatible, and therefore dropped the <2.0
  version requirement on that dependency. Upgrade as you please! (#131)
- Improved the regex for identifying phone numbers so that it's easier to view
  and interpret its matches. (#128)
Bugfixes:
- Fixed caching of counts on `textacy.Doc` to make it instance-specific, rather than
  shared by all instances of the class. Oops.
- Fixed currency symbols regex, so as not to replace all instances of the letter "z"
  when a custom string is passed into `replace_currency_symbols()`. (#137)
- Fixed README usage example, which skipped downloading of dataset data. Btw,
  see above for another way! (#124)
- Fixed typo in the API reference, which included the SupremeCourt dataset twice
  and omitted the RedditComments dataset. (#129)
- Fixed typo in `RedditComments.download()` that prevented it from downloading
  any data. (#143)
Contributors:
Many thanks to @asifm, @harryhoch, and @mdlynch37 for submitting PRs!
0.4.1
Changes:
- Added key classes to the top-level `textacy` imports, for convenience:
  - `textacy.text_stats.TextStats` => `textacy.TextStats`
  - `textacy.vsm.Vectorizer` => `textacy.Vectorizer`
  - `textacy.tm.TopicModel` => `textacy.TopicModel`
- Added tests for `textacy.Doc` and updated the README's usage example
Bugfixes:
- Added explicit encoding when opening Wikipedia database files in text mode to
  resolve an issue when doing so without encoding on Windows (PR #118)
- Fixed `keyterms.most_discriminating_terms` to use the `vsm.Vectorizer` class
  rather than the `vsm.doc_term_matrix` function that it replaced (PR #120)
- Fixed mishandling of a couple optional args in `Doc.to_terms_list`
Contributors:
Thanks to @minketeer and @Gregory-Howard for the fixes!
Datasets, vectorization, and some customizability
Changes:
- Refactored and expanded built-in `corpora`, now called `datasets` (PR #112)
  - The various classes in the old `corpora` subpackage had a similar but
    frustratingly not-identical API. Also, some fetched the corresponding dataset
    automatically, while others required users to do it themselves. Ugh.
  - These classes have been ported over to a new `datasets` subpackage; they
    now have a consistent API, consistent features, and consistent documentation.
    They also have some new functionality, including pain-free downloading of
    the data and saving it to disk in a stream (so as not to use all your RAM).
  - Also, there's a new dataset: A collection of 2.7k Creative Commons texts
    from the Oxford Text Archive, which rounds out the included datasets with
    English-language, 16th-20th century literary works. (h/t @JonathanReeve)
- A `Vectorizer` class to convert tokenized texts into variously weighted
  document-term matrices (Issue #69, PR #113)
  - This class uses the familiar `scikit-learn` API (which is also consistent
    with the `textacy.tm.TopicModel` class) to convert one or more documents
    in the form of "term lists" into weighted vectors. An initial set of documents
    is used to build up the matrix vocabulary (via `.fit()`), which can then
    be applied to new documents (via `.transform()`). See the sketch after this list.
  - It's similar in concept and usage to sklearn's `CountVectorizer` or
    `TfidfVectorizer`, but doesn't convolve the tokenization task as they do.
    This means users have more flexibility in deciding which terms to vectorize.
    This class outright replaces the `textacy.vsm.doc_term_matrix()` function.
- Customizable automatic language detection for `Doc`s
  - Although `cld2-cffi` is fast and accurate, its installation is problematic
    for some users. Since other language detection libraries are available
    (e.g. `langdetect` and `langid`), it makes sense to let users choose,
    as needed or desired.
  - First, `cld2-cffi` is now an optional dependency, i.e. is not installed
    by default. To install it, do `pip install textacy[lang]` or (for it and
    all other optional deps) do `pip install textacy[all]`. (PR #86)
  - Second, the `lang` param used to instantiate `Doc` objects may now
    be a callable that accepts a unicode string and returns a standard 2-letter
    language code. This could be a function that uses `langdetect` under the
    hood, or a function that always returns "de" -- it's up to users. Note that
    the default value is now `textacy.text_utils.detect_language()`, which
    uses `cld2-cffi`, so the default behavior is unchanged.
- Customizable punctuation removal in the `preprocessing` module (Issue #91)
  - Users can now specify which punctuation marks they wish to remove, rather
    than always removing all marks.
  - In the case that all marks are removed, however, performance is now 5-10x
    faster by using Python's built-in `str.translate()` method instead of
    a regular expression.
- `textacy`, installable via `conda` (PR #100)
  - The package has been added to Conda-Forge, and installation instructions
    have been added to the docs. Hurray!
- `textacy`, now with helpful badges
  - Builds are now automatically tested via Travis CI, and there's a badge in
    the docs showing whether the build passed or not. The days of my ignoring
    broken tests in `master` are (probably) over...
  - There are also badges showing the latest releases on GitHub, pypi, and
    conda-forge (see above).
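For reference, here's a minimal sketch tying the new `Vectorizer` and the callable `lang` param together. The example texts and the always-"en" detector are placeholders, and it assumes an English spaCy model is installed.

```python
import textacy
from textacy.vsm import Vectorizer

def detect_en(text):
    # trivial "detector" for illustration; swap in langdetect/langid as desired
    return "en"

docs = [
    textacy.Doc("Mr. Speaker, I rise today to talk about the economy.", lang=detect_en),
    textacy.Doc("The economy added jobs for yet another month.", lang=detect_en),
]

# Vectorizer consumes "term lists", so tokenization stays under your control
tokenized = [
    list(doc.to_terms_list(ngrams=1, named_entities=False, as_strings=True))
    for doc in docs
]
vectorizer = Vectorizer()            # default weighting
vectorizer.fit(tokenized)            # builds the vocabulary
doc_term_matrix = vectorizer.transform(tokenized)
```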
Bugfixes:
- Fixed the check for overlap between named entities and unigrams in the
  `Doc.to_terms_list()` method (PR #111)
- `Corpus.add_texts()` uses CPU_COUNT - 1 threads by default, rather than
  always assuming that 4 cores are available (Issue #89)
- Added a missing coding declaration to a test file, without which tests failed
  for Python 2 (PR #99)
- `readability_stats()` now catches an exception raised on empty documents and
  logs a message, rather than barfing with an unhelpful `ZeroDivisionError`.
  (Issue #88)
- Added a check for empty terms list in `terms_to_semantic_network` (Issue #105)
- Added and standardized module-specific loggers throughout the code base; not
  a bug per se, but certainly some much-needed housecleaning
- Added a note to the docs about expectations for bytes vs. unicode text (PR #103)
Contributors:
Thanks to @henridwyer, @rolando, @pavlin99th, and @kyocum for their contributions!
🙌
v0.3.4
Changes:
- Improved and expanded calculation of basic counts and readability statistics in
  the `text_stats` module.
  - Added a `TextStats()` class for more convenient, granular access to individual
    values. See usage docs for more info. When calculating, say, just one readability
    statistic, performance with this class should be slightly better; if calculating
    all statistics, performance is worse owing to unavoidable, added overhead in
    Python for variable lookups. The legacy function `text_stats.readability_stats()`
    still exists and behaves as before, but a deprecation warning is displayed.
  - Added functions for calculating Wiener Sachtextformel (PR #77), LIX, and
    GULPease readability statistics.
  - Added number of long words and number of monosyllabic words to basic counts.
- Clarified the need for having spacy models installed for most use cases of textacy,
  in addition to just the spacy package.
  - README updated with comments on this, including links to more extensive spacy
    documentation. (Issues #66 and #68)
  - Added a function, `compat.get_config()`, that includes information about which
    (if any) spacy models are installed.
  - Recent changes to spacy, including a warning message, will also make model
    problems more apparent.
- Added an `ngrams` parameter to `keyterms.sgrank()`, allowing for more flexibility
  in specifying valid keyterm candidates for the algorithm. (PR #75)
- Dropped dependency on the `fuzzywuzzy` package, replacing usage of
  `fuzz.token_sort_ratio()` with a textacy equivalent in order to avoid license
  incompatibilities. As a bonus, the new code seems to perform faster! (Issue #62)
  - Note: Outputs are now floats in [0.0, 1.0], consistent with other similarity
    functions, whereas before outputs were ints in [0, 100]. This has implications
    for `match_threshold` values passed to `similarity.jaccard()`; a warning is
    displayed and the conversion is performed automatically, for now.
    (See the sketch after this list.)
- A MANIFEST.in file was added to include docs, tests, and distribution files in
  the source distribution. This is just good practice. (PR #65)
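A minimal sketch of the rescaled similarity outputs, assuming the `fuzzy_match` and `match_threshold` kwarg names on `similarity.jaccard()`; the exact names are worth double-checking against the API docs.

```python
from textacy import similarity

# outputs are now floats in [0.0, 1.0], so thresholds move from e.g. 80 -> 0.8
score = similarity.jaccard(
    ["green", "eggs", "and", "ham"],
    ["greens", "egg", "and", "ham"],
    fuzzy_match=True,
    match_threshold=0.8,
)
assert 0.0 <= score <= 1.0
```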
Bugfixes:
- Known acronym-definition pairs are now properly handled in
  `extract.acronyms_and_definitions()` (Issue #61)
- WikiReader no longer crashes on null page element content while parsing (PR #64)
- Fixed a rare but perfectly legal edge case exception in `keyterms.sgrank()`,
  and added a window width sanity check. (Issue #72)
- Fixed assignment of 2-letter language codes to `Doc` and `Corpus` objects
  when the lang parameter is specified as a full spacy model name.
- Replaced several leftover print statements with proper logging functions.
Contributors:
Big thanks to @oroszgy, @rolando, @covuworie, and @RolandColored for the pull requests!
v0.3.3
Changes:
- Added a consistent `normalize` param to functions and methods that require
  token/span text normalization. Typically, it takes one of the following values:
  'lemma' to lemmatize tokens, 'lower' to lowercase tokens, False-y to not normalize
  tokens, or a function that converts a spacy token or span into a string, in whatever
  way the user prefers (e.g. `spacy_utils.normalized_str()`). See the sketch after
  this list.
  - Functions modified to use this param: `Doc.to_bag_of_terms()`,
    `Doc.to_bag_of_words()`, `Doc.to_terms_list()`, `Doc.to_semantic_network()`,
    `Corpus.word_freqs()`, `Corpus.word_doc_freqs()`, `keyterms.sgrank()`,
    `keyterms.textrank()`, `keyterms.singlerank()`,
    `keyterms.key_terms_from_semantic_network()`,
    `network.terms_to_semantic_network()`, `network.sents_to_semantic_network()`
- Tweaked `keyterms.sgrank()` for higher quality results and improved internal
  performance.
- When getting both n-grams and named entities with `Doc.to_terms_list()`,
  filtering out numeric spans for only one is automatically extended to the other.
  This prevents unexpected behavior, such as passing `filter_nums=True` but getting
  numeric named entities back in the terms list.
Bugfixes:
- `keyterms.sgrank()` no longer crashes if a term is missing from `idfs`
  mapping. (@jeremybmerrill, issue #53)
- Proper nouns are no longer excluded from consideration as keyterms in
  `keyterms.sgrank()` and `keyterms.textrank()`. (@jeremybmerrill, issue #53)
- Empty strings are now excluded from consideration as keyterms — a bug inherited
  from spaCy. (@mlehl88, issue #58)