
Commit

Update info
Skipper Seabold committed Apr 29, 2018
1 parent b203d3d commit 28c57c0
Showing 1 changed file with 10 additions and 4 deletions.
1 - Reading Data.ipynb (14 changes: 10 additions & 4 deletions)
@@ -950,7 +950,7 @@
 "source": [
 "In addition to the core I/O functionality in pandas, there is also the [pandas-datareader](https://pandas-datareader.readthedocs.io/en/latest/) project. This package provides programmatic access to data sets from\n",
 "\n",
-"* Yahoo! Finance\n",
+"* Yahoo! Finance (deprecated)\n",
 "* Google Finance\n",
 "* Enigma\n",
 "* Quandl\n",
@@ -959,9 +959,11 @@
 "* World Bank\n",
 "* OECD\n",
 "* Eurostat\n",
-"* EDGAR Index\n",
+"* EDGAR Index (deprecated)\n",
 "* TSP Fund Data\n",
-"* Nasdaq Trader Symbol Definitions"
+"* Nasdaq Trader Symbol Definitions\n",
+"* Morningstar\n",
+"* Etc."
 ]
 },
 {
@@ -975,7 +977,11 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Sometimes we need to be resourceful in order to get data. Knowing how to scrape the web can really come in handy. We're not going to go into details today, but you'll likely find libraries like [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/), [lxml](http://lxml.de/), and [mechanize](https://mechanize.readthedocs.io/en/latest/) to be helpful. There's also a `read_html` function in pandas that will quickly scrape HTML tables for you and put them into a DataFrame. "
+"Sometimes we need to be resourceful in order to get data. Knowing how to scrape the web can really come in handy.\n",
+"\n",
+"We're not going to go into details today, but you'll likely find libraries like [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/), [lxml](http://lxml.de/), and [mechanize](https://mechanize.readthedocs.io/en/latest/) to be helpful. \n",
+"\n",
+"There's also a `read_html` function in pandas that will quickly scrape HTML tables for you and put them into a DataFrame. "
 ]
 }
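The `read_html` function mentioned in the last changed cell parses every `<table>` element in an HTML document and returns a list of DataFrames. A small self-contained sketch (the table and its numbers are made up for illustration; `read_html` needs an HTML parser such as lxml, or html5lib plus BeautifulSoup, installed):

```python
from io import StringIO

import pandas as pd

# A toy HTML table; in practice you would pass a URL or a downloaded page.
html = StringIO("""
<table>
  <tr><th>city</th><th>population</th></tr>
  <tr><td>Austin</td><td>950</td></tr>
  <tr><td>Dallas</td><td>1300</td></tr>
</table>
""")

# read_html returns one DataFrame per <table>; take the first.
df = pd.read_html(html)[0]
print(df)
```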
