```shell
npm install wtf-plugin-api
```
Some helper methods for getting additional data from the wikimedia API.

The main [wtf_wikipedia](https://github.com/spencermountain/wtf_wikipedia) library has a few basic methods for fetching data from the Wikipedia API - you can get an article with `.fetch()`, a category with `.category()`, or a random page with `.random()`. There are a bunch of other cool ways to get data from the API though, and this plugin tries to help with that.

Please use the Wikipedia API respectfully. This plugin is not meant to be used at high volumes. If you are seeking information on many Wikipedia pages, consider parsing a dump instead. There are also ways to batch requests, to reduce strain on Wikimedia servers. These methods are meant to be simple wrappers for quick access.

Where appropriate, this plugin throttles requests to a maximum of 3 at a time.
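The throttling idea is roughly: run requests in small batches rather than all at once. A minimal sketch of that pattern (the `chunk` and `throttled` helpers here are illustrative, not the plugin's internals):

```javascript
// split a list into batches of `size`, so only that many
// requests are in flight at once
function chunk(list, size) {
  const out = []
  for (let i = 0; i < list.length; i += size) {
    out.push(list.slice(i, i + size))
  }
  return out
}

// run `fn` over every item, one batch of `size` at a time
async function throttled(items, fn, size = 3) {
  const results = []
  for (const batch of chunk(items, size)) {
    const done = await Promise.all(batch.map(fn))
    results.push(...done)
  }
  return results
}
```

Each batch completes fully before the next one starts, which keeps concurrency bounded at `size`.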
To install:

```js
const wtf = require('wtf_wikipedia')
wtf.extend(require('wtf-plugin-api'))
```

or in the browser:

```html
<script src="https://unpkg.com/wtf_wikipedia"></script>
<script src="https://unpkg.com/wtf-plugin-api"></script>
<script defer>
  wtf.plugin(window.wtfApi)
  wtf.fetch('Radiohead', function (err, doc) {
    doc.getRedirects().then((list) => console.log(list))
  })
</script>
```
Redirects are an assortment of alternative names and misspellings for a Wikipedia page. They can be a rich source of data. On Wikipedia, you can see all the redirects for a page with the 'What links here' tool.
```js
// fetch all a page's redirects
let doc = await wtf.fetch('Toronto Raptors')
let redirects = await doc.getRedirects()
console.log(redirects)
/*
[
  { title: 'the raptors' },
  { title: 'We The North' },
  ...
]
*/
```
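Since redirects double as alternate names, one common use is building a lookup of every name a page goes by. A sketch, hard-coding data in the shape returned above:

```javascript
// redirect objects, in the shape returned by getRedirects()
const redirects = [{ title: 'the raptors' }, { title: 'We The North' }]

// build a case-insensitive set of aliases, plus the canonical title
const aliases = new Set(redirects.map((r) => r.title.toLowerCase()))
aliases.add('toronto raptors')

console.log(aliases.has('we the north')) // true
```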
You can also get all pages that link to this page.
```js
// get all pages that link to this document
let doc = await wtf.fetch('Toronto Raptors')
let list = await doc.getIncoming()
console.log(list)
/*
[
  { title: 'Toronto' },
  { title: 'Jurassic Park (film)' },
  { title: 'National Basketball Association' },
  ...
]
*/
```
By default, this method returns only full pages - not redirects or talk pages.
Wikipedia provides daily page-view information, which offers a rough metric of a topic's popularity.
```js
let doc = await wtf.fetch('Toronto Raptors')
let byDay = await doc.getPageViews()
console.log(byDay)
/*
{
  '2020-08-30': 4464,
  '2020-08-31': 2739,
  '2020-09-01': 3774,
  '2020-09-02': 3347,
  '2020-09-03': 3569,
  ...
}
*/
```
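Because the result is a plain date-to-count object, summarizing it is straightforward. A sketch using a few of the sample values above:

```javascript
// daily counts, in the shape returned by getPageViews()
const byDay = {
  '2020-08-30': 4464,
  '2020-08-31': 2739,
  '2020-09-01': 3774,
}

// total and average views over the period
const counts = Object.values(byDay)
const total = counts.reduce((sum, n) => sum + n, 0)
const average = Math.round(total / counts.length)
console.log(total, average) // 10977 3659
```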
Get the name of a random Wikipedia page, from a given wiki:

```js
wtf.getRandomPage({ lang: 'fr' }).then((doc) => {
  console.log(doc.title())
  // 'Édifice religieux à Paris'
})
```
Get the name of a random Wikipedia category, from a given wiki:

```js
wtf.getRandomCategory({ lang: 'fr' }).then((cat) => {
  console.log(cat)
  // 'Catégorie:Édifice religieux à Paris'
})
```
Fetch all documents and sub-categories in a given category. This only returns identifying information for each page, not the actual page content.

```js
wtf.getCategoryPages('Major League Baseball venues').then((pages) => {
  pages.map((page) => page.title)
  // [
  //   'List of current Major League Baseball stadiums',
  //   'List of former Major League Baseball stadiums',
  //   ...
  //   'Category:Spring training ballparks',
  //   'Category:Wrigley Field'
  // ]
})
```
Pages can be retrieved recursively from all sub-categories by passing `recursive: true` as part of the options:

```js
wtf.getCategoryPages('Major League Baseball venues', { recursive: true })
```
To exclude certain categories from being expanded, specify them in `categoryExclusions`. Each excluded category must be given with the `Category:` prefix, but without the underscores commonly seen in Wikipedia page titles. Note that the excluded category pages themselves will still be returned, but the pages within those sub-categories will not.

```js
wtf.getCategoryPages('Major League Baseball venues', {
  recursive: true,
  categoryExclusions: [
    'Category:Defunct Major League Baseball venues',
    'Category:Major League ballpark logos'
  ]
})
```
As a safety limit, a maximum depth can be specified, which limits how many levels of sub-categories recursive mode will traverse. This is off by default.

```js
wtf.getCategoryPages('Major League Baseball venues', { recursive: true, maxDepth: 2 })
```
Sometimes you want to get all the data for one infobox or template, across all the pages it belongs to. Wikipedia offers on-wiki tools to list the pages that use a specific template, and to get an approximate count of them.

This method fetches and parses all documents that use (aka 'transclude') a specific template or infobox. You can get the name of the template from viewing the page's source. Sometimes you need to add a `Template:` prefix to the start of it, sometimes you don't.
```js
// parse all the Swiss badminton-player stub articles
wtf.getTemplatePages('Template:Switzerland-badminton-bio-stub').then((docs) => {
  docs.forEach((doc) => {
    let height = doc.infobox(0).get('height')
    console.log(doc.title(), height)
  })
})
```
`wtf.fetchList()` will fetch an array of articles, in a throttled way. It is built to work in concert with the other methods in this plugin, so you can compose them like this:
```js
let pages = await wtf.getTemplatePages('Template:Switzerland-badminton-bio-stub')
let docs = await wtf.fetchList(pages)
// grab infobox data for each badminton player:
docs.forEach((doc) => {
  let infobox = doc.infobox(0)
  if (infobox && infobox.get('height')) {
    console.log(doc.title(), infobox.get('height').text())
  }
})
// Christian Boesiger 1.73 m
// Sabrina Jaquet 1.69m
// Céline Burkart 1.65 m
// Oliver Schaller 1.80 m
// Anthony Dumartheray 1.78 m
// Ayla Huser 1.68 m
```
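The infobox values come back as plain strings, so comparing them usually means parsing a number out first. A sketch, hard-coding a few of the height strings printed above:

```javascript
// heights as plain infobox text values, as printed above
const heights = ['1.73 m', '1.69m', '1.65 m', '1.80 m']

// parseFloat reads the leading number and ignores the unit suffix
const metres = heights.map((h) => parseFloat(h))
const tallest = Math.max(...metres)
console.log(tallest) // 1.8
```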
- `doc.getRedirects()` - fetch all pages that redirect to this document
- `doc.getIncoming()` - fetch all pages that link to this document
- `doc.getPageViews()` - daily traffic report for this document
- `wtf.getRandomCategory()` - get the name of a random wikipedia category
- `wtf.getTemplatePages()` - fetch all pages that use a specific template or infobox
- `wtf.getCategoryPages()` - fetch all pages in a specified category
MIT