Merge branch 'master' into Installation-Documentation-#57
RyanHiltyAllegheny authored Apr 28, 2021
2 parents 9d79f15 + 9a7dff4 commit f9c22b1
Showing 12 changed files with 616 additions and 317 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/main.yml
@@ -8,7 +8,7 @@ jobs:
runs-on: ${{ matrix.os }}-latest
strategy:
matrix:
os: [Ubuntu, MacOS]
os: [Ubuntu, MacOS, Windows]
python-version: [3.7, 3.8]
steps:
- uses: actions/checkout@v2
1 change: 1 addition & 0 deletions .travis.yml
@@ -39,5 +39,6 @@ script:
- pipenv run flake8 src
- pipenv run flake8 tests
- mdl README.md
- mdl docs
after_success:
- pipenv run codecov
1 change: 1 addition & 0 deletions Pipfile
@@ -24,6 +24,7 @@ textblob = "*"
scipy = "*"
pylint = "*"
importlib-metadata = "*"
atomicwrites = "*"

[pipenv]
allow_prereleases = true
615 changes: 328 additions & 287 deletions Pipfile.lock


133 changes: 133 additions & 0 deletions docs/LANDING_PAGE.md
@@ -0,0 +1,133 @@
# Welcome to GatorMiner!

GatorMiner is an automated text-mining tool, written in Python, that measures the
technical responsibility of students in computer science courses. The Department
of Computer Science at Allegheny College uses it to analyze students' markdown
reflection documents and five-question surveys with Natural Language Processing.

## Data Retrieval

There are currently two ways to import text data for analysis: through the local file system or through AWS DynamoDB.

### Local File System

What is a local file system?

- A controlled place where data can be stored and retrieved. In this case, this
is where GatorMiner keeps data isolated so it can be easily identified.

In GatorMiner, you can type in the path(s) to the directory(ies) that hold
reflection markdown documents. You are welcome to try the tool with the sample
documents provided in `resources`, for example:

```
resources/sample_md_reflections/lab1, resources/sample_md_reflections/lab2, resources/sample_md_reflections/lab3
```

### AWS

Retrieving reflection documents from AWS is a feature integrated with
[GatorGrader](https://github.com/GatorEducator/gatorgrader), where students'
markdown reflection documents are collected and stored inside a
pre-configured DynamoDB database. In order to use this feature, you will need
the following credential tokens stored as environment variables:

```
export GATOR_ENDPOINT=<Your Endpoint>
export GATOR_API_KEY=<Your API Key>
export AWS_ACCESS_KEY_ID=<Your Access Key ID>
export AWS_SECRET_ACCESS_KEY=<Your Secret Access Key>
```

It is likely that you already have these prepared when using GatorMiner in
conjunction with GatorGrader, since these would already be exported when
setting up the AWS services. You can read more about setting up an AWS service
with GatorGrader [here](https://github.com/enpuyou/script-api-lambda-dynamodb).

Once the documents are successfully imported, you can then navigate through
the select box in the sidebar to view the text analysis:

<img src="resources/images/select_box.png" alt="browser" style="width:100%"/>

## Analysis

### Frequency Analysis

Frequency analysis is the quantification and study of word usage in text, that is, how often each word appears within a given text. It can reveal aspects of assignments that instructors may not otherwise be able to observe, and GatorMiner makes this information available in a user-friendly and intuitive fashion.

Within the GatorMiner tool, you have the ability to choose `Frequency Analysis` as an analysis option after the path to the desired reflection documents is submitted.

When the tool runs a frequency analysis on any number of assignments, it provides three options to choose from:

- Overall
- Student
- Question

When `Overall` is selected, the application will display a vertical bar chart containing a list of the words used with the highest frequency for each given assignment.

When `Student` is selected, a dropdown menu lets you pick which student the tool should display frequency data for. As with `Overall`, the data is displayed as a vertical bar chart, and you can display multiple students' data on the same page in order to compare and contrast the types of words used by each student.

Finally, when `Question` is selected, the option to pick one or more specific questions appears. The tool then produces and displays a vertical bar chart which contains frequency information for each of the selected questions in the assignment. This is helpful for comparing the ways in which different terms are utilized within different questions in an assignment.

<img src="resources/images/frequency.png" alt="browser" style="width:100%"/>
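The per-assignment counts behind these charts amount to tallying tokens. A minimal sketch of that idea using Python's `collections.Counter`; the reflection text and the `top_words` helper are made up for illustration and are not part of GatorMiner's API:

```python
from collections import Counter
from typing import List, Tuple


def top_words(text: str, n: int = 3) -> List[Tuple[str, int]]:
    """Return the n most frequent words in a piece of text."""
    words = text.lower().split()
    return Counter(words).most_common(n)


# Hypothetical reflection text standing in for a parsed markdown document.
reflection = "the lab was hard but the lab taught me a lot about the debugger"
print(top_words(reflection, 2))  # → [('the', 3), ('lab', 2)]
```

A real pipeline would first strip stop words such as "the" so that the distinctive vocabulary rises to the top.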

### Sentiment Analysis

Sentiment analysis (or opinion mining) is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. Overall,
this is a technique to determine whether data is positive, negative, or neutral.

Within the GatorMiner tool, you have the ability to choose `Sentiment Analysis` as an analysis option after the path to the desired reflection documents is submitted.

When the tool runs a sentiment analysis on any number of assignments, it provides three options to choose from:

- Overall
- Student
- Question

When `Overall` is selected, a scatter plot and a bar chart appear on the screen,
displaying the overall sentiment polarity of a given assignment (for example, assignment-01) across all authors.

When `Student` is selected, the user can choose a specific student to
observe. The tool then shows that student's sentiment in a small bar chart and in a larger histogram. Inside this feature, you can also change the number of plots per row.

Finally, when `Question` is selected, the user can choose a certain question from the drop-down menu. When chosen, the tool shows the sentiment of the responses given to that question.

<img src="resources/images/sentiment.png" alt="browser" style="width:100%"/>
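GatorMiner computes these polarity scores with TextBlob. The sketch below illustrates the idea of a polarity score with a toy hand-written lexicon standing in for TextBlob's trained model, so the words and numbers are illustrative only:

```python
from typing import Dict

# Toy polarity lexicon standing in for TextBlob's sentiment model.
LEXICON: Dict[str, float] = {
    "good": 0.7, "great": 0.8, "fun": 0.5,
    "hard": -0.3, "confusing": -0.6,
}


def polarity(text: str) -> float:
    """Average the polarity of the words in text; 0.0 means neutral."""
    scores = [LEXICON.get(word, 0.0) for word in text.lower().split()]
    return sum(scores) / len(scores) if scores else 0.0


print(polarity("the lab was fun"))        # positive overall
print(polarity("the lab was confusing"))  # negative overall
```

A positive score means the text leans positive, a negative score means it leans negative, and scores near zero are neutral, which is exactly how the scatter plot and histograms above are read.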

### Document Similarity

Document similarity analyzes documents and compares their text to determine how similar the word usage is between documents.

Within the GatorMiner tool, you have the ability to choose `Document Similarity` as an analysis option after the path to the desired reflection documents is submitted.

In the `Document Similarity` section, you are able to select the type of similarity analysis: `TF-IDF` or `Spacy`.

When `TF-IDF` is selected, the application will display a frequency matrix showing the correlation between documents. It does this by dividing the frequency of each word by the total number of terms in the document.
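The weighting behind TF-IDF can be sketched in plain Python. This is an illustration of the formula, not GatorMiner's actual implementation: each term's frequency in a document is discounted by how many documents use it, so words shared by every document score zero.

```python
import math
from collections import Counter
from typing import Dict, List


def tf_idf(docs: List[List[str]]) -> List[Dict[str, float]]:
    """Weight each term by its frequency, discounted by document frequency."""
    n = len(docs)
    # Document frequency: how many documents contain each term.
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in counts.items()
        })
    return weights


docs = [["lab", "was", "hard"], ["lab", "was", "fun"]]
weights = tf_idf(docs)
# "lab" and "was" appear in every document, so their weight is 0.0;
# "hard" and "fun" are distinctive to one document, so they score higher.
```

Comparing the resulting weight vectors (for example with cosine similarity) yields the document-to-document correlation matrix the application displays.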

When `Spacy` is selected, the application will display a drop-down named `Model name` with two options:

- `en_core_web_sm`, which is used to produce a correlation matrix for **smaller** files (< 13 MB).
- `en_core_web_md`, which is used to produce a correlation matrix for **larger** files (> 13 MB).

**Warning: exceeding these file limits could cause the program to crash.**

**See [Spacy.io](https://spacy.io/models/en) for more details on file limits.**

<img src="resources/images/similarity.png" alt="browser" style="width:100%"/>

### Topic Modeling

Topic modeling analyzes documents to find keywords in order to determine the documents' dominant topics.

Within the GatorMiner tool, you have the ability to choose `Topic Modeling` as an analysis option after the path to the desired reflection documents is submitted.

In the `Topic Modeling` section, you are able to select the type of topic modeling visualization: `Histogram` or `Scatter`.

When `Histogram` is selected, the application will display a histogram in which the dominant topic is on the x-axis and the count of records is on the y-axis. A legend in the top right corner will display the names of the reflection files next to the color that corresponds to them.

When `Scatter` is selected, the application will display a scatter plot. The legend on the right side will display the colors and shapes that correspond to the topic numbers.

Sliders are also provided to adjust the number of topics or the number of words per topic.

<img src="resources/images/topic.png" alt="browser" style="width:100%"/>
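A "dominant topic", as charted above, can be thought of as the topic whose keywords a document mentions most. The toy sketch below hard-codes hypothetical keyword lists to make the idea concrete; a real topic model such as LDA learns these keyword lists from the documents instead of being given them:

```python
from typing import Dict, List, Set

# Hypothetical hand-written topic keyword lists; a real topic model learns these.
TOPICS: Dict[str, Set[str]] = {
    "debugging": {"bug", "debugger", "error", "traceback"},
    "teamwork": {"partner", "team", "pair", "meeting"},
}


def dominant_topic(tokens: List[str]) -> str:
    """Pick the topic whose keywords overlap the document the most."""
    hits = {name: sum(t in kw for t in tokens) for name, kw in TOPICS.items()}
    return max(hits, key=hits.get)


doc = ["my", "partner", "and", "the", "team", "fixed", "one", "bug"]
print(dominant_topic(doc))  # → teamwork
```

The histogram view then simply counts how many documents fall under each dominant topic.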
28 changes: 28 additions & 0 deletions src/analyzer.py
@@ -1,5 +1,7 @@
"""Text Preprocessing"""
from collections import Counter
from textblob import TextBlob
import pandas as pd
import re
import string
from typing import List, Tuple
@@ -138,3 +140,29 @@ def noun_phrase(input_text):
for chunk in doc.noun_chunks:
n_phrase_lst.append(str(chunk))
return n_phrase_lst


def top_polarized_word(tokens_column):
"""Create columns for positive and negative words"""
# Start off with empty lists
pos_series = []
neg_series = []
# For each item in the already existing tokens column in the data frame,
for token_element in tokens_column:
words = sorted_sentiment_word_list(token_element)
# Add the words at the end of the list to the display series since
# they have the most positive sentiment value
pos_series.append(", ".join(words[::-1][0:3]))
neg_series.append(", ".join(words[0:3]))
# Return an entire series based on the display_series list
return pd.Series(pos_series), pd.Series(neg_series)


def sorted_sentiment_word_list(token_element):
"""Creates and sorts a word list from a list of tokens"""
# Convert the token list into a set so that it only has the unique words
words = set(token_element)
# Convert back into list to iterate through
unique_words = list(words)
# Returning the sorted list
return sorted(unique_words, key=lambda x: TextBlob(x).sentiment.polarity)
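The two helpers above sort a token list by TextBlob polarity and take three words from each end of the sorted list. A self-contained sketch of the same idea, with a toy polarity table standing in for `TextBlob(word).sentiment.polarity` (the words and scores are made up for illustration):

```python
from typing import Dict, List

# Toy polarity scores standing in for TextBlob's sentiment model.
POLARITY: Dict[str, float] = {
    "awful": -0.9, "hard": -0.3, "okay": 0.1, "good": 0.7, "great": 0.9,
}


def sorted_sentiment_word_list(tokens: List[str]) -> List[str]:
    """Deduplicate tokens and sort them from most negative to most positive."""
    return sorted(set(tokens), key=lambda w: POLARITY.get(w, 0.0))


tokens = ["great", "awful", "hard", "okay", "good", "great"]
words = sorted_sentiment_word_list(tokens)
print(", ".join(words[::-1][0:3]))  # three most positive words
print(", ".join(words[0:3]))        # three most negative words
```

Because the list is sorted ascending by polarity, reversing it (`words[::-1]`) puts the most positive words first, which is exactly how `top_polarized_word` builds its two display columns.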
4 changes: 4 additions & 0 deletions src/constants.py
@@ -18,3 +18,7 @@
NORMAL = "normalized"
ASSIGNMENT = "assignment"
SENTI = "sentiment"

# Columns
POSITIVE = "Positive words"
NEGATIVE = "Negative words"
24 changes: 20 additions & 4 deletions src/markdown.py
@@ -1,6 +1,7 @@
"""Markdown parser"""
import os
import logging
from io import StringIO
from typing import Dict, List
import commonmark
import pandas as pd
@@ -36,19 +37,25 @@ def get_file_names(directory_name: str) -> List[str]:
return file_list


def merge_dict(dict_1, dict_2: Dict[str, str]) -> Dict[str, List[str]]:
def merge_dict(dict_1, dict_2: Dict[str, str], preserve: bool) -> \
Dict[str, List[str]]:
"""Merge two dictionaries and store values of common keys in list"""
if dict_1 is None:
dict_1 = {k: [] for k in dict_2.keys()}
elif isinstance(list(dict_1.values())[0], list) is False:
dict_1 = {k: [v] for k, v in dict_1.items()}
if(preserve):
for key in dict_2.keys():
if key not in dict_1:
dict_1[key] = []
for f in range(0, len(list(dict_1.values())[0])):
dict_1[key].append("")
for key in dict_1.keys():
try:
dict_1[key].append(dict_2[key])
except KeyError as err:
dict_1[key].append("")
logging.warning(f"Key does not exist: {err}")

return dict_1


@@ -58,7 +65,7 @@ def collect_md(directory: str, is_clean=True) -> Dict[str, List[str]]:
main_md_dict = None
for file in file_names:
individual_dict = md_parser(read_file(file), is_clean)
main_md_dict = merge_dict(main_md_dict, individual_dict)
main_md_dict = merge_dict(main_md_dict, individual_dict, False)
return main_md_dict


@@ -95,10 +102,19 @@ def md_parser(input_md: str, is_clean=True) -> Dict[str, str]:
md_dict[cur_heading] += subnode.literal + " "
else:
continue
print(md_dict)
return md_dict


def import_uploaded_files(paths: List) -> Dict[str, List[str]]:
"""Importing the individual files"""
main_md_dict = None
for path in paths:
stringio = StringIO(path.getvalue().decode("utf-8"))
individual_dict = md_parser(stringio.read(), True)
main_md_dict = merge_dict(main_md_dict, individual_dict, True)
return main_md_dict


def build_pd(md_dict):
"""build dictionary into dataframe"""
md_df = pd.DataFrame(md_dict)
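The new `preserve` flag to `merge_dict` back-fills empty strings when a later file introduces a heading that earlier files lacked, so every column stays the same length. A standalone sketch of that behavior; the body mirrors the diff above, minus the logging, with the `KeyError` handling collapsed into `dict.get`:

```python
from typing import Dict, List


def merge_dict(dict_1, dict_2: Dict[str, str], preserve: bool) -> Dict[str, List[str]]:
    """Merge two dictionaries, collecting values of common keys into lists."""
    if dict_1 is None:
        dict_1 = {k: [] for k in dict_2.keys()}
    elif not isinstance(list(dict_1.values())[0], list):
        dict_1 = {k: [v] for k, v in dict_1.items()}
    if preserve:
        # Back-fill new keys with empty strings for every document seen so far.
        for key in dict_2:
            if key not in dict_1:
                dict_1[key] = [""] * len(list(dict_1.values())[0])
    for key in dict_1:
        dict_1[key].append(dict_2.get(key, ""))
    return dict_1


merged = merge_dict(None, {"Q1": "first answer"}, True)
merged = merge_dict(merged, {"Q1": "second answer", "Q2": "new question"}, True)
print(merged)
# → {'Q1': ['first answer', 'second answer'], 'Q2': ['', 'new question']}
```

With `preserve=False`, as `collect_md` passes it, a heading that only appears in a later file would still be added, but without padding for the earlier documents.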
2 changes: 2 additions & 0 deletions src/visualization.py
@@ -141,6 +141,8 @@ def senti_circleplot(senti_df, student_id):
tooltip=[
alt.Tooltip(cts.SENTI, title="polarity"),
alt.Tooltip(student_id, title="author"),
alt.Tooltip(cts.POSITIVE, title="Positive words"),
alt.Tooltip(cts.NEGATIVE, title="Negative words"),
],
)
).interactive()