diff --git a/1 - Reading Data.ipynb b/1 - Reading Data.ipynb
index 2214c88..68c2eec 100644
--- a/1 - Reading Data.ipynb
+++ b/1 - Reading Data.ipynb
@@ -950,7 +950,7 @@
    "source": [
     "In addition to the core I/O functionality in pandas, there is also the [pandas-datareader](https://pandas-datareader.readthedocs.io/en/latest/) project. This package provides programmatic access to data sets from\n",
     "\n",
-    "* Yahoo! Finance\n",
+    "* Yahoo! Finance (deprecated)\n",
     "* Google Finance\n",
     "* Enigma\n",
     "* Quandl\n",
@@ -959,9 +959,11 @@
     "* World Bank\n",
     "* OECD\n",
     "* Eurostat\n",
-    "* EDGAR Index\n",
+    "* EDGAR Index (deprecated)\n",
     "* TSP Fund Data\n",
-    "* Nasdaq Trader Symbol Definitions"
+    "* Nasdaq Trader Symbol Definitions\n",
+    "* Morningstar\n",
+    "* Etc."
    ]
   },
   {
@@ -975,7 +977,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Sometimes we need to be resourceful in order to get data. Knowing how to scrape the web can really come in handy. We're not going to go into details today, but you'll likely find libraries like [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/), [lxml](http://lxml.de/), and [mechanize](https://mechanize.readthedocs.io/en/latest/) to be helpful. There's also a `read_html` function in pandas that will quickly scrape HTML tables for you and put them into a DataFrame. "
+    "Sometimes we need to be resourceful in order to get data. Knowing how to scrape the web can really come in handy.\n",
+    "\n",
+    "We're not going to go into details today, but you'll likely find libraries like [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/), [lxml](http://lxml.de/), and [mechanize](https://mechanize.readthedocs.io/en/latest/) to be helpful. \n",
+    "\n",
+    "There's also a `read_html` function in pandas that will quickly scrape HTML tables for you and put them into a DataFrame. "
   ]
  }
 ],
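
Not part of the patch itself, but for context on the two cells edited above, here is a minimal sketch of how the access routes the notebook mentions are typically used: `pandas_datareader.DataReader` for remote data sources and `pandas.read_html` for scraping HTML tables. It assumes `pandas` and `pandas-datareader` are installed; the FRED series code and the Wikipedia URL are illustrative placeholders, not taken from the notebook.

```python
# Minimal sketch, not part of the patch. The FRED series and the URL below
# are placeholders chosen for illustration.
import pandas as pd
import pandas_datareader.data as web

# pandas-datareader: pull a series from a remote source. FRED is used here
# because the Yahoo! Finance backend is marked deprecated in the patch above.
gs10 = web.DataReader("GS10", "fred", start="2015-01-01", end="2015-12-31")
print(gs10.head())

# pandas.read_html: parse every <table> on a page into a list of DataFrames.
# Requires an HTML parser such as lxml or beautifulsoup4 + html5lib.
tables = pd.read_html("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
print(len(tables))
print(tables[0].head())
```

Note that `read_html` returns a list of DataFrames, one per table found, so you typically index into the result rather than using it directly.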