diff --git a/.readthedocs.yaml b/.readthedocs.yaml
new file mode 100644
index 0000000..7bc042b
--- /dev/null
+++ b/.readthedocs.yaml
@@ -0,0 +1,25 @@
+# Read the Docs configuration file for Sphinx project
+
+# Required
+version: 2
+
+# Set the OS, Python version and other tools you might need
+build:
+ os: ubuntu-22.04
+ tools:
+ python: "3.9"
+
+# Build documentation in the "docs/" directory with Sphinx
+sphinx:
+ configuration: docs/conf.py
+
+# Optionally build your docs in additional formats such as PDF and ePub
+# formats:
+# - pdf
+# - epub
+
+# Optional but recommended, declare the Python requirements required
+# to build your documentation
+python:
+ install:
+ - requirements: requirements.txt
\ No newline at end of file
diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 0000000..e0ef45a
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,81 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/) (with possible titles of 'added', 'changed', 'deprecated', 'removed', 'fixed', or 'security'),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+This should align with:
+* Releases on [PyPI](https://pypi.org/project/kailo-beewell-dashboard/#history)
+* Releases on [GitHub](https://github.com/kailo-beewell/kailo_beewell_dashboard_package/releases) (which are like a non-portable changelog only displayed to users within GitHub)
+
+## 0.2.0
+
+**Release date:** 1st March 2024
+
+**Contributors:** Amy Heather
+
+Modified package so it can be used to produce the **synthetic symbol** #BeeWell survey dashboard, as well as the standard survey dashboard.
+
+https://github.com/kailo-beewell/kailo_beewell_dashboard_package/compare/main...amy
+
+### Added
+
+* Created `CHANGELOG.md` (with backdated entry for 0.1.0)
+* Created `CITATION.cff`
+* Created package documentation using Sphinx and readthedocs - `docs/` and `.readthedocs.yaml`
+* `create_group_list()` in `bar_charts.py`, which creates a correctly formatted string depending on the number of inputs (e.g. 'a', 'a and b', 'a, b and c')
+* `grammar.py` - contains `lower_first()`, which converts the first letter of a string to lower case, unless all the other letters are upper case
+* Add `aggregate_demographic()` to `create_and_aggregate_data.py`
+* Added new alternative inputs, outputs and processes to existing functions that were developed for the standard survey, so they can be used to output equivalent content for the symbol survey. This includes:
+ * `authentication.py` - custom login screen text
+ * `create_and_aggregate_data.py` - two possible sets of groups to aggregate by
+ * `explore_results.py` - custom page text, and providing survey type to functions like `filter_by_group()`
+  * `import_data.py` - added the names of the symbol survey data as used in session state and in TiDB Cloud, and simplified the import function
+ * `page_setup.py` - to use name 'symbol' survey in the page menu
+  * `reshape_data.py` - for differences with the symbol survey (e.g. excluding SEN)
+ * `response_labels.py` - new function `create_symbol_response_label_dict()`
+  * `static_report.py` - converted some of the processes into standalone functions, so they can be imported into the old function producing the static standard report and the new function producing the static symbol report
+  * `who_took_part.py` - modifications for the symbol survey, such as different headers and no descriptive text
+
+### Changed
+
+* Moved all About page text to `reuse_text.py`
+* In `requirements.txt`, upgraded pip to 24.0 and streamlit to 1.31.1, and added packages used to produce the documentation (sphinx, sphinx-rtd-theme, myst-parser, sphinx-autoapi)
+
+### Removed
+
+* Removed the option in `import_data.py` to compare the data imported from TiDB with CSVs imported from the project directory (as the function wasn't being used)
+
+### Fixed
+
+* Hide responses (and non-responses) on 'Who took part' if n<10 for a given response option (stricter than elsewhere on the dashboard, which just requires n>=10 for the entire question). For this, modified `survey_responses()` in `bar_charts.py`
+
+## 0.1.0
+
+**Release date:** 21st February 2024
+
+**Contributors:** [Amy Heather](https://github.com/amyheather)
+
+First release of the kailo-beewell-dashboard package on PyPI. Contains functions for production of the **synthetic standard** #BeeWell survey dashboard (including the static PDF version, which can be produced using the dashboard).
+
+### Added
+
+Functions used to generate and process data, and to produce the dashboard:
+* `authentication.py` - for user authentication using Django
+* `bar_charts.py` - to create bar charts of the proportions of each survey response, or ordered bar charts comparing scores
+* `bar_charts_text.py` - dictionary with descriptions to go above bar charts on the 'explore results' page
+* `convert_image.py` - converts a plotly figure to HTML string
+* `create_and_aggregate_data.py` - functions used to create and process the pupil-level data
+* `explore_results.py` - functions used for the 'explore results' page
+* `import_data.py` - connects to TiDB Cloud and imports data to session state
+* `page_setup.py` - page configuration, styling, formatting
+* `reshape_data.py` - to reshape data or extract a certain element, with functions often used across multiple different pages
+* `response_labels.py` - dictionary of labels to each of the question response options
+* `reuse_text.py` - two sections of text that were reused on different pages of the dashboard vs PDF report
+* `score_descriptions.py` - simple descriptions used on the 'explore results' page to support score interpretation
+* `static_report.py` - uses same functions as on the dashboard, to produce a static HTML report (containing same information as in dashboard)
+* `stylable_container.py` - produces stylised containers for streamlit
+* `summary_rag.py` - produces the red-amber-green boxes and 'summary' page introduction and table
+* `switch_page.py` - function to switch page programmatically
+* `who_took_part.py` - functions used for the 'who took part' page
\ No newline at end of file
diff --git a/CITATION.cff b/CITATION.cff
new file mode 100644
index 0000000..ef3fd7a
--- /dev/null
+++ b/CITATION.cff
@@ -0,0 +1,29 @@
+# This CITATION.cff file was generated with cffinit.
+# Visit https://bit.ly/cffinit to generate yours today!
+
+cff-version: 1.2.0
+title: kailo-beewell-dashboard
+message: >-
+ If you use this package, please cite it using the metadata
+ from this file.
+type: software
+authors:
+ - given-names: Amy
+ family-names: Heather
+ email: a.heather2@exeter.ac.uk
+ affiliation: University of Exeter
+ orcid: 'https://orcid.org/0000-0002-6596-3479'
+repository-code: >-
+ https://github.com/kailo-beewell/kailo_beewell_dashboard_package
+repository-artifact: 'https://pypi.org/project/kailo-beewell-dashboard/'
+abstract: >-
+ Tools to support creation of #BeeWell survey dashboards
+ for the Kailo project
+keywords:
+ - python
+ - streamlit
+ - '#BeeWell'
+ - Kailo
+license: MIT
+version: 0.2.0
+date-released: '2024-03-01'
diff --git a/README.md b/README.md
index ad6c0af..dd51e30 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,15 @@
# `kailo-beewell-dashboard`: tools to support creation of #BeeWell survey dashboards for the Kailo project.
-[![ORCID: Amy Heather](https://img.shields.io/badge/ORCID-0000--0002--6596--3479-brightgreen)](https://orcid.org/0000-0002-6596-3479)
+[![PyPI package](https://img.shields.io/badge/PyPI_package-0.2.0-2596be.svg)](https://pypi.org/project/kailo-beewell-dashboard/0.2.0/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+[![ORCID: Amy Heather](https://img.shields.io/badge/ORCID_Amy_Heather-0000--0002--6596--3479-brightgreen)](https://orcid.org/0000-0002-6596-3479)
This package contains functions that are used in the creation of the various #BeeWell survey dashboards for Kailo. They have been compiled into a single package to prevent code duplication between the different repositories.
+**Package on PyPI:** https://pypi.org/project/kailo-beewell-dashboard/
+
+**Package documentation:** http://kailo-beewell-dashboard.readthedocs.io/
+
## Features
1. Functions used to generate and aggregate synthetic data
@@ -13,3 +18,20 @@ This package contains functions that are used in the creation of the various #Be
## How to install?
`pip install kailo-beewell-dashboard`
+
+## Citation
+
+If you use this package, please include the following citation:
+
+> Heather, Amy. (2024). kailo-beewell-dashboard: tools to support creation of #BeeWell survey dashboards for the Kailo project (0.2.0). https://pypi.org/project/kailo-beewell-dashboard/.
+
+```tex
+@software{kailo_beewell_dashboard,
+ author = {Heather, Amy},
+ title = {kailo-beewell-dashboard},
+ date = {2024-03-01},
+ version = {0.2.0},
+ publisher = {PyPI},
+ url = {https://pypi.org/project/kailo-beewell-dashboard/}
+}
+```
\ No newline at end of file
diff --git a/docs/Makefile b/docs/Makefile
new file mode 100644
index 0000000..d4bb2cb
--- /dev/null
+++ b/docs/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS ?=
+SPHINXBUILD ?= sphinx-build
+SOURCEDIR = .
+BUILDDIR = _build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/authentication.md b/docs/authentication.md
new file mode 100644
index 0000000..54657a2
--- /dev/null
+++ b/docs/authentication.md
@@ -0,0 +1,37 @@
+# Authentication
+
+This guide provides a step-by-step account of how I set up user authentication for this dashboard, based on these tutorials:
+* [https://towardsdatascience.com/secure-your-streamlit-app-with-django-bb0bee2a6519](https://towardsdatascience.com/secure-your-streamlit-app-with-django-bb0bee2a6519)
+* [https://towardsdatascience.com/streamlit-access-control-dae3ab8b7888](https://towardsdatascience.com/streamlit-access-control-dae3ab8b7888)
+
+## How to set up user authentication
+
+1. If not already in the environment, install Django (add it to requirements.txt and recreate the environment)
+2. Create a Django app. In terminal:
+ * `django-admin startproject config .`
+
+ This produced a config folder containing 5 .py files.
+3. Create a superuser to manage all other users:
+ * `python3 manage.py migrate`
+ * `python3 manage.py createsuperuser`
+
+    Then enter the username `kailobeewell` and the Dartington email address. I have a record of the password personally, which I can share with other team members.
+
+4. Start the server: `python3 manage.py runserver`.
+5. Open web-browser and go to http://localhost:8000/admin/. Login with the superuser username and password just created.
+6. Add some users by clicking '+ Add' next to Users, then entering a username and password for each user. For the synthetic dashboard, I created some basic logins (which will need to be more secure for the actual dashboard). Each user followed the format:
+ * Username: schoola
+ * Password: schoolapassword
+7. Create the authentication.py script (see the utilities folder)
+8. On each of the app pages, import the check_password() function from authentication.py, then nest all the page code under it, e.g.:
+```
+import streamlit as st
+from utilities.authentication import check_password
+
+if check_password():
+ st.title('Page title')
+    st.write('Page content')
+ ...
+```
+
+The actions above (migrate, superuser, adding users) will have generated and modified a db.sqlite3 file. Make sure you push this up to the GitHub repository - I found that the app failed on deployment without it.
\ No newline at end of file
diff --git a/docs/changelog_link.md b/docs/changelog_link.md
new file mode 100644
index 0000000..8261b35
--- /dev/null
+++ b/docs/changelog_link.md
@@ -0,0 +1,2 @@
+```{include} ../CHANGELOG.md
+```
\ No newline at end of file
diff --git a/docs/conf.py b/docs/conf.py
new file mode 100644
index 0000000..164284d
--- /dev/null
+++ b/docs/conf.py
@@ -0,0 +1,41 @@
+# Configuration file for the Sphinx documentation builder.
+
+# -- Project information -----------------------------------------------------
+
+project = 'kailo-beewell-dashboard'
+copyright = '2024, Amy Heather'
+author = 'Amy Heather'
+
+# -- General configuration ---------------------------------------------------
+
+extensions = [
+ 'myst_parser', # To use markdown as well as reStructuredText
+ 'autoapi.extension' # Auto generate module and function documentation
+]
+
+# Location of files for auto API
+autoapi_dirs = ['../kailo_beewell_dashboard']
+
+# File types for documentation
+source_suffix = ['.rst', '.md']
+
+templates_path = ['_templates']
+
+# Location of toctree
+master_doc = 'contents'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This pattern also affects html_static_path and html_extra_path.
+exclude_patterns = ['_build', '_templates', 'Thumbs.db', '.DS_Store']
+
+language = 'en'
+
+# -- Options for HTML output -------------------------------------------------
+
+html_theme = 'sphinx_rtd_theme'
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
diff --git a/docs/contents.rst b/docs/contents.rst
new file mode 100644
index 0000000..566943e
--- /dev/null
+++ b/docs/contents.rst
@@ -0,0 +1,14 @@
+Site contents
+*************
+
+.. toctree::
+
+ Home
+ environments
+ package_maintenance
+ authentication
+ hosting_data
+ standard_survey_scores
+ streamlit_community_cloud
+ changelog_link
+ Documentation for modules and functions (API Reference)
\ No newline at end of file
diff --git a/docs/environments.md b/docs/environments.md
new file mode 100644
index 0000000..8af6338
--- /dev/null
+++ b/docs/environments.md
@@ -0,0 +1,25 @@
+# Environments
+
+This package and the accompanying dashboard repositories use virtual environments. This page contains tips and advice relating to these, in case they are unfamiliar to you.
+
+## Virtual environment wrapper
+
+I recommend using virtualenvwrapper to manage your python virtual environments. This is because:
+* It stores all your environments in one place (so you can easily see a list of your environments)
+* The syntax for deleting an environment prevents you from accidentally deleting folders, as it has a specific command for this, rather than a general delete command that could apply to either an environment or a folder with the same name
+
+To do so, you'll need to have `pip`, `python`, `virtualenv` and `virtualenvwrapper` installed on your machine.
+
+Commands:
+* Create environment - `mkvirtualenv env_kailo_dashboards`
+* Enter environment - `workon env_kailo_dashboards`
+* Install requirements into environment - `pip install -r requirements.txt`
+* See list of all available environments - `workon`
+* List contents of active environment - `pip list`
+* Delete environment - `rmvirtualenv env_kailo_dashboards`
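+
+As a quick reference, a typical session might look like the following (a sketch - it assumes virtualenvwrapper is already installed and configured, and that a `python3.9` interpreter is available to pass via `-p`):
+```
+# Create an environment on a specific Python version
+mkvirtualenv -p python3.9 env_kailo_dashboards
+
+# Activate it and install the project dependencies
+workon env_kailo_dashboards
+pip install -r requirements.txt
+
+# Remove it once it is no longer needed
+rmvirtualenv env_kailo_dashboards
+```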
+
+## Streamlit Community Cloud
+
+Streamlit Community Cloud only appears to work with virtual environments - it states compatibility with environment.yml, but this failed when my colleague and I attempted it.
+
+Therefore, we use a virtual environment with the requirements.txt file provided and Python version 3.9.12 (with Community Cloud set up on Python 3.9).
\ No newline at end of file
diff --git a/docs/hosting_data.md b/docs/hosting_data.md
new file mode 100644
index 0000000..2e7fecb
--- /dev/null
+++ b/docs/hosting_data.md
@@ -0,0 +1,204 @@
+# Hosting data
+
+The datasets used to produce the dashboard will technically be anonymised data, but we do not want them to be easily downloadable and stored directly in the GitHub repository. Therefore, we have to connect to an external data source.
+
+Below is a step-by-step guide on how this was set up with TiDB Cloud. At the end are notes from when I explored some of the other options (which I didn't end up pursuing).
+
+## How to link to data hosted in TiDB Cloud from Streamlit
+
+### Part 1. Set up TiDB Cloud
+
+1. If not already doing so, make sure that when saving the Python DataFrames to CSV files, you include `na_rep='NULL'` in `df.to_csv()` (see the short sketch after this list). Otherwise, you will encounter issues with SQL struggling to parse Python's null values.
+2. Create a TiDB Cloud account - I used the Kailo BeeWell DSDL Google account
+3. You'll have Cluster0 automatically created in your account. Click on the cluster, then go to Data > Import and drag and drop a CSV file.
+ * Location will be "Local" as we upload from our computer.
+ * Set the database (synthetic_standard_survey) and table name (I chose to match filename e.g. overall_counts).
+ * For **aggregate_responses** and **aggregate_scores_rag** and **aggregate_demographic**, set **counts** and **percentages** columns to **VARCHAR(512)**. This means they are read as strings and ensures exact match to CSV (e.g. 0.0 rather than 0), and avoids errors relating to NaN in the lists.
+ * If you are **replacing a file**, you'll need to delete it first, otherwise it will append new rows to the existing table. To do this, go to 'Chat2Query', and run `DROP TABLE table_name;`, before then going to 'Import' and uploading the file.
+    * I found some unusual behaviour when replacing one of the tables using the same name as before - it was doing something (perhaps modifying the order, it wasn't clear) that caused it to not match the CSV file. This was resolved by deleting the back-ups.
+
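+Relating to step 1, a minimal sketch of the CSV export (the dataframe and filename here are illustrative, not the exact ones used in the project):
+```
+import pandas as pd
+
+df = pd.DataFrame({'mean_score': [1.0, None, 3.0]})
+
+# Write 'NULL' (rather than an empty string) for missing values, so that
+# the SQL import parses them correctly
+df.to_csv('aggregate_scores.csv', index=False, na_rep='NULL')
+```
+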
+### Part 2. Link Streamlit to TiDB Cloud
+
+I have used and explained two methods - one that only worked on my local machine, and one that worked both on my local machine and when deployed on Streamlit Community Cloud.
+
+Method that is **compatible** with Streamlit Community Cloud:
+
+1. Add **pymysql** to requirements.txt and update the environment
+2. On TiDB Cloud, go to the cluster overview and click the "Connect" blue button in the top right corner. On the pop-up, click:
+ * "Generate Password" - make a record of that password
+ * "Download the CA Cert" - to get the .pem file
+
+3. Copy the password and parameters from that pop-up into the .streamlit/secrets.toml file - example:
+```
+[tidb]
+dialect = 'mysql'
+driver = 'pymysql'
+host = ''
+port = ''
+database = ''
+username = ''
+password = ''
+root_cert = ''''''
+```
+
+4. Likewise, copy this information into the deployed app's secrets. To do this, open https://share.streamlit.io/ and go to the dashboard's Settings, then click on the Secrets tab and paste it in there.
+
+5. To import the data within the Streamlit app, follow the code below. It would be simpler to use pd.read_sql() instead of writing the get_df() function, but read_sql() returns an error message as we haven't set this up with SQLAlchemy - if we wanted to use read_sql(), we would likely need to base the connection on SQLAlchemy and a connection string, rather than on pymysql.connect().
+```
+from tempfile import NamedTemporaryFile
+import pandas as pd
+import pymysql
+import streamlit as st
+
+def get_df(query, conn):
+ '''
+ Get data from the connected SQL database
+
+ Parameters:
+ -----------
+ query : string
+ SQL query
+ conn : connection object
+ Connection to the SQL database
+
+ Returns:
+ --------
+ df : pandas DataFrame
+ Dataframe produced from the query
+ '''
+ cursor = conn.cursor()
+ cursor.execute(query)
+ columns = [desc[0] for desc in cursor.description]
+ df = pd.DataFrame(cursor.fetchall(), columns=columns)
+ return df
+
+
+# Create temporary PEM file for setting up the connection
+with NamedTemporaryFile(suffix='.pem') as temp:
+
+ # Write the temporary file
+ temp.write(st.secrets.tidb.root_cert.encode('utf-8'))
+
+    # Temporary files have a pointer to the current position in the file - as
+    # we have just written to it, the pointer is at the end of the last write,
+    # so if you don't seek back to the start, you would read from the end of
+    # the file and find nothing
+ temp.seek(0)
+
+ # Set up connection manually, providing the temporary PEM file
+ # (as cannot use st.connection() without providing tempfile name in secrets)
+ conn = pymysql.connect(
+ host = st.secrets.tidb.host,
+ user = st.secrets.tidb.username,
+ password = st.secrets.tidb.password,
+ database = st.secrets.tidb.database,
+ port = st.secrets.tidb.port,
+ ssl_verify_cert = False,
+ ssl_verify_identity = False,
+ ssl_ca = temp.name
+ )
+
+    scores = get_df('SELECT * FROM aggregate_scores;', conn)
+```
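+
+If you did want to go down the `read_sql()` route, a minimal sketch of that alternative is below. This is untested here, so treat it as an assumption rather than a confirmed working setup - it assumes SQLAlchemy has been added to requirements.txt, and that it runs inside the same `with NamedTemporaryFile(...)` block as above (so that `temp.name` still exists):
+```
+from sqlalchemy import create_engine
+
+# Build a SQLAlchemy engine from the same secrets, still using the pymysql
+# driver, passing the temporary PEM file via connect_args
+engine = create_engine(
+    f'mysql+pymysql://{st.secrets.tidb.username}:{st.secrets.tidb.password}'
+    f'@{st.secrets.tidb.host}:{st.secrets.tidb.port}'
+    f'/{st.secrets.tidb.database}',
+    connect_args={'ssl': {'ca': temp.name}})
+
+scores = pd.read_sql('SELECT * FROM aggregate_scores;', engine)
+```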
+
+Method that was **not** compatible with Streamlit Community Cloud (the environment failed to build because of mysqlclient):
+
+1. Add **mysqlclient** and **SQLAlchemy** to requirements.txt and update environment. In order to install mysqlclient on Linux, as on the mysqlclient [GitHub page](https://github.com/PyMySQL/mysqlclient), I had to first run `sudo apt-get install python3-dev default-libmysqlclient-dev build-essential pkg-config`
+2. As above, get the password and details for the secrets file, but this time structured differently:
+```
+[connections.tidb]
+dialect = "mysql"
+host = ""
+port = 4000
+database = ""
+username = ""
+password = ""
+```
+
+3. To import the data within the Streamlit app:
+```
+conn = st.connection('tidb', type='sql')
+df = conn.query('SELECT * from mytablename;')
+```
+
+## Other hosting options that didn't work out
+
+### MongoDB
+
+**Incomplete - requires you to install software to upload data, and I was hoping for an online user interface.**
+
+On MongoDB site:
+1. Create MongoDB account - https://account.mongodb.com/account/register - using Kailo BeeWell DSDL Google account
+2. On start up, deploy M0 free forever database (512 MB storage, shared RAM, shared vCPU) - using AWS, eu-west-1 Ireland, cluster kailo
+3. Created a user
+4. Set connection from My Local Environment and add my IP address (done automatically)
+5. Navigated to Database Deployments page, clicked on the cluster name (kailo), selected the "collections" tab
+6. Select load sample dataset
+
+On Python:
+1. Add pymongo (4.6.1) to requirements.txt and re-installed
+2. Create .streamlit/secrets.toml with contents:
+```
+db_username = ''
+db_pswd = ''
+cluster_name = ''
+```
+3. In the Python streamlit pages, add this code:
+```
+import streamlit as st
+from pymongo import MongoClient
+
+# Initialise connection, using cache_resource() so we only need to run once
+@st.cache_resource()
+def init_connection():
+ return MongoClient(f'''mongodb+srv://{st.secrets.db_username}:{st.secrets.db_pswd}@{st.secrets.cluster_name}.2ebtoba.mongodb.net/?retryWrites=true&w=majority''')
+
+client = init_connection()
+
+@st.cache_data(ttl=60)
+def get_data():
+    db = client.sample_guides  # establish connection to the 'sample_guides' db
+    items = db.planets.find()  # return all results from the 'planets' collection
+ items = list(items)
+ return items
+data = get_data()
+
+st.markdown(data[0])
+```
+
+Options for import of CSV data to MongoDB:
+* MongoDBCompass
+* mongoimport tool
+* MongoDB Shell
+* MongoDB Drivers
+
+All require you to install software though.
+
+### Firestore
+
+**Incomplete - requires installation of paid software to upload documents, or writing a script to iterate over files, which seems needlessly complex.**
+
+1. Add google-cloud-firestore==2.14.0 to requirements.txt and remake environment
+2. Sign in to kailobeewell DSDL google account
+3. Go to https://console.firebase.google.com/ and click "Create project"
+4. Project name "kailo-synth-standard-school", accepted terms, disabled google analytics
+5. Click on "Cloud Firestore", then "create a database"
+6. Set location of europe-west2 (London) and start in test mode (open, anyone can read/write, will change later)
+7. There is not any way to import data using the Firebase console - https://medium.com/@xathis/import-csv-firebase-firestore-without-code-gui-tool-3987923947b6 - it appears you have to install separate software or write a script that parses the file, iterates over rows and creates documents
+
+### Private Google Sheet
+
+**Incomplete - requires Google Cloud account**
+
+I have used private Google sheets following this tutorial: https://docs.streamlit.io/knowledge-base/tutorials/databases/private-gsheet. I have also explained each step below.
+
+1. Add st-gsheets-connection 0.0.3 to the Python environment (I found dependency incompatibility with the latest versions of pandas and geopandas, so I had to downgrade both - geopandas to 0.14.2 and pandas to 1.5.3).
+2. If not already, add your data to Google Sheets.
+3. If not already, create an account on the Google Cloud Platform and log in (https://cloud.google.com/). We'll use the Google Cloud Storage and Google Sheets APIs, which are both available on the free tier.
+4. Go to the APIs & Services dashboard.
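+
+For reference, had this been completed, the tutorial linked above suggests the final import step would look roughly like the sketch below (not something that was set up or tested for this project - the worksheet name is illustrative, and the spreadsheet URL and service account credentials would sit in `.streamlit/secrets.toml` under `[connections.gsheets]`):
+```
+import streamlit as st
+from streamlit_gsheets import GSheetsConnection
+
+# Create the connection from the details stored in secrets.toml
+conn = st.connection('gsheets', type=GSheetsConnection)
+
+# Read a worksheet into a pandas dataframe
+df = conn.read(worksheet='Sheet1')
+```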
+
+### Deta Space
+
+**Incomplete - requires developer mode, which I requested but was never granted.**
+
+1. Create account on Deta Space
+2. On Horizon, click the purple circle > add card to horizon > shortcut > collections and drag onto space
+3. Open Collections app and then create a new collection
+4. On that collection, go to collection settings then create new data key button, give the key a name, and click generate
+5. Requires developer mode, had to complete questionnaire and request it
\ No newline at end of file
diff --git a/docs/index.rst b/docs/index.rst
new file mode 100644
index 0000000..72809da
--- /dev/null
+++ b/docs/index.rst
@@ -0,0 +1,15 @@
+Documentation for kailo-beewell-dashboard
+=========================================
+
+**kailo-beewell-dashboard** is a Python package with functions that are used in
+the creation of the various #BeeWell survey dashboards for Kailo. They have
+been compiled into a single package to prevent code duplication between the
+different repositories.
+
+This site contains guides to some of the key processes for set-up and
+maintenance of the dashboard, as well as the automatically generated
+documentation for each of the modules and functions.
+
+.. note::
+
+ This project is under active development.
diff --git a/docs/make.bat b/docs/make.bat
new file mode 100644
index 0000000..32bb245
--- /dev/null
+++ b/docs/make.bat
@@ -0,0 +1,35 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+ set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=.
+set BUILDDIR=_build
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+ echo.
+ echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+ echo.installed, then set the SPHINXBUILD environment variable to point
+ echo.to the full path of the 'sphinx-build' executable. Alternatively you
+ echo.may add the Sphinx directory to PATH.
+ echo.
+ echo.If you don't have Sphinx installed, grab it from
+ echo.https://www.sphinx-doc.org/
+ exit /b 1
+)
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+
+:end
+popd
diff --git a/docs/package_maintenance.md b/docs/package_maintenance.md
new file mode 100644
index 0000000..aa24d61
--- /dev/null
+++ b/docs/package_maintenance.md
@@ -0,0 +1,67 @@
+# Package maintenance
+
+## Testing
+
+Whilst working on the dashboards, you can use a live version of the package to test new features. To do this, set up your virtual environment with a requirements.txt file containing `-e ../kailo_beewell_dashboard_package`.
+
+This will import a live version of your local package, assuming that your dashboard folder is a sister directory of the package folder.
+
+When running your Streamlit site with `streamlit run Home.py`, it will use the package version from the point at which you ran that command, so you will need to re-run it to pick up any updates to the package.
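+
+For example, a dashboard's `requirements.txt` for local testing might look something like this (a sketch - the commented-out line is the pinned release you would normally install from PyPI, and the versions shown are illustrative):
+```
+# kailo-beewell-dashboard==0.2.0
+-e ../kailo_beewell_dashboard_package
+streamlit==1.31.1
+```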
+
+## Documentation
+
+If you create any new functions - or modify existing functions - you should create or modify the docstrings accordingly. Docstrings for this package and the accompanying dashboard repositories are formatted based on the [numpy docstring style guide](https://numpydoc.readthedocs.io/en/latest/format.html).
+
+Package documentation is created using Sphinx and hosted on Read the Docs (which automatically updates with GitHub pushes). You can preview updated documentation locally by running the following from within the `docs/` directory:
+1. `make clean`
+2. `make html`
+
+You can then view the local documentation by opening the file `docs/_build/html/index.html` in your browser.
+
+## Linting
+
+Whilst coding, you should be linting your .py and .ipynb files.
+* Use the `Flake8` VS Code extension to lint your .py files
+* Lint .ipynb files from the terminal by running `nbqa flake8 notebook.ipynb`
+
+## Updating data on TiDB Cloud
+
+If you have made changes to the processing steps that produce the aggregated data, you'll need to make sure that the updated CSV files are uploaded to TiDB Cloud, replacing the previous tables.
+
+## Publishing a new version
+
+When you are ready to publish a new version of this package to PyPI, these are the recommended steps you should go through.
+
+1. **Test all dashboards** using the latest version of the package functions, by importing the live version of the package (`-e ../kailo_beewell_dashboard_package`) before proceeding
+2. **Update version number** using [Semantic Versioning](https://semver.org/spec/v2.0.0.html) in:
+ * `__init__.py`
+ * `CITATION.cff`
+ * `README.md` PyPI package badge
+ * `README.md` Harvard citation
+ * `README.md` Latex citation - and add the new date for the latest version
+3. **Update changelog** (`CHANGELOG.md`) with new version, detailing:
+ * Upload date
+ * Contributors
+ * Short section (one or two sentences) summarising changes
+ * Detailed section with changes, with formatting based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/) - i.e. possible titles of 'added', 'changed', 'deprecated', 'removed', 'fixed', or 'security'
+4. **Push to main** on GitHub, and switch to the main branch
+5. **Upload to PyPI** for which you need to:
+ * a) Delete the existing `dist/` folder
+ * b) Run `python setup.py sdist bdist_wheel`
+ * c) Run `twine upload --skip-existing --repository-url https://upload.pypi.org/legacy/ dist/*`
+6. **Create GitHub release**
+7. **Update package version on community cloud** by:
+ * a) Updating package version in requirements.txt for each dashboard repository
+ * b) Pushing changes to main
+ * c) Rebooting dashboards on [https://share.streamlit.io/](https://share.streamlit.io/)
+
+The documentation is hosted with Read the Docs. If any changes were made to the package documentation, this should be automatically updated from your latest GitHub push.
+
+
+
+## New contributors
+
+If new contributors join the project, substantially contributing to the package, then you should update the citations accordingly in:
+* `README.md` ORCID badges
+* `README.md` citation
+* `CITATION.cff` file
\ No newline at end of file
diff --git a/docs/standard_survey_scores.md b/docs/standard_survey_scores.md
new file mode 100644
index 0000000..36240a0
--- /dev/null
+++ b/docs/standard_survey_scores.md
@@ -0,0 +1,51 @@
+# Standard #BeeWell survey scores
+
+This table summarises the scoring for each topic in the standard #BeeWell survey (including the directionality and range of scores for the questions themselves, and for the overall topic score).
+
+**Topic scores all positive:** I have converted all topic scores to have a higher score being the more positive outcome. This was to help simplify the presentation of information. If you have a mix of high scores being positive and negative, then describing results as "above average" or "below average" is confusing. I considered a range of options for this issue, and ultimately decided on converting all to positive. The options considered included:
+* Phrasing as "better than average" or "worse than average" (too lengthy for summary page, without shrinking font)
+* Phrasing as "more positive" or "more negative" (misses the key component about relationship to average)
+* Phrasing as "above average" or "below average" and then explaining that above average can mean a high or low score (feels really confusing)
+* Converting all to positive direction and then phrasing as "above average" or "below average" and explaining that a higher score is more positive and a lower score is more negative (feels like the simplest answer)
+
+This does however mean that you need to be wary of directly comparing scores between GM and Devon, as GM use a mix of directions.
+
+**Further information:** To see exactly how scores were calculated, please see the relevant sections of Python code in this repository.
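+
+As a rough illustration of the kind of conversion involved (not necessarily the exact code used in this repository), a topic where a higher raw score is the more negative outcome can be flipped so that higher is positive by reflecting the score within its range:
+```
+def reverse_score(score, score_min, score_max):
+    '''Reflect a score within its range, so that high raw scores become low
+    reversed scores and vice versa (e.g. on a 1 to 5 scale, 5 becomes 1).'''
+    return score_min + score_max - score
+```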
+
+**Abbreviations:**
+* GM (Greater Manchester) - referring to their dashboards from implementation of #BeeWell.
+* exc. (excluded) - often for responses that would be excluded from scoring, like "don't know" or "unsure"
+
+*You may need to scroll across to view the full table*
+
+| Topic | Questions used in topic score | Question response range | Implications of max question score | Topic score range | Implications of max topic score | "Meaning" of max topic score | Comments |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Autonomy | I feel pressured in my life<br>I generally feel free to express my ideas and opinions<br>I feel like I am free to decide for myself how to live my life<br>In my daily life I often have to do what I am told<br>I feel I can pretty much be myself in daily situations<br>I have enough choice about how I spend my time | 1 (completely not true) to 5 (completely true) | Mixed - 4 positive questions, 2 negative questions | 6 to 30 | Positive | Higher levels of autonomy | Matches GM |
+| Life satisfaction | Overall, how satisfied are you with your life nowadays? | 0 (not at all) to 10 (completely) | Positive | 0 to 10 | Positive | Higher levels of life satisfaction | Matches GM |
+| Optimism | I am optimistic about my future<br>In uncertain times, I expect the best<br>I think good things are going to happen to me<br>I believe that things will work out, no matter how difficult they seem | 1 (almost never) to 5 (always), and 1 (not at all like me) to 5 (very much like me) | Positive | 4 to 20 | Positive | Higher levels of optimism | Matches GM |
+| Psychological wellbeing | I've been feeling optimistic about the future<br>I've been feeling useful<br>I've been feeling relaxed<br>I've been dealing with problems well<br>I've been thinking clearly<br>I've been feeling close to other people<br>I've been able to make up my own mind about things | 1 (none of the time) to 5 (all of the time) | Positive | 7 to 35 | Positive | Higher levels of psychological wellbeing | Matches GM |
+| Self-esteem | I feel good about myself<br>I feel that I have a number of good qualities<br>On the whole, I am satisfied with myself<br>I am a person of value<br>I am able to do things as well as most other people | 1 (strongly agree) to 4 (strongly disagree) | Negative | 5 to 20 | Positive | Higher levels of self-esteem | Matches GM |
+| Stress and coping | Felt you were unable to control the important things in your life<br>Felt that difficulties were piling up so high that you could not overcome them<br>Felt confident about your ability to handle your personal problems<br>Felt that things were going your way | 1 (never) to 5 (very often) | Mixed - 2 positive, 2 negative | 0 to 16 | Positive | Lower levels of perceived stress | Reversed from GM |
+| Feelings around appearance | How happy are you with your appearance (the way that you look)? | 0 (very unhappy) to 10 (very happy) and 11 (prefer not to say) | Positive (exc. prefer not to say) | 0 to 10 | Positive | Higher levels of happiness about appearance | Only use first appearance question as that is the key focus whilst second ("My appearance affects how I feel about myself") is more "deeper" understanding |
+| Negative affect | I feel scared<br>Nobody likes me<br>I have problems sleeping<br>I worry a lot<br>I worry when I am at school<br>I am shy<br>I cry a lot<br>I am unhappy<br>I feel lonely<br>I wake up in the night | 1 (never) to 3 (always) | Negative | 0 to 20 | Positive | Less negative affect | Reversed from GM |
+| Loneliness | How often do you feel lonely? | 1 (often or always) to 5 (never) | Positive | 1 to 5 | Positive | Lower levels of loneliness | Reversed from GM |
+| Supporting own wellbeing | I have ways to support myself (e.g. to cope, or help myself feel better)<br>I know where to look for advice on how to support myself | 1 (strongly agree) to 4 (strongly disagree) | Negative | 2 to 8 | Positive | More able to support self | - |
+| Sleep | Is the amount of sleep you normally get enough for you to feel awake and concentrate on your school work during the day? | 0 (no) and 1 (yes) | Positive | 0 to 1 | Positive | Feel they get enough sleep | GM have question but don't score |
+| Physical activity | How many days in a usual week are you physically active?<br>How long on average do you spend being physically active? | 0 to 7 (0 to 7 days) and 30 to 120 (around 30 minutes, to around 2 hours or more) | Positive | 0 to 840 | Positive | Higher levels of physical activity | GM have question but don't score |
+| Free time | How often can you do things that you like in your free time? | 1 (almost always) to 5 (almost never) | Negative | 1 to 5 | Positive | More often able to do things they like | GM have question but don't score |
+| Social media use | On a normal weekday during term time, how much time do you spend on social media? | 1 (none) to 9 (7 hours or more) | Negative | 0 to 8 | Positive | Less time spent on social media | Reversed from GM<br>Doesn't use types of social media, as overall time felt like key thing |
+| Places to go and things to do | How many activities/places are there in your local area, that you choose to or would want to go to in your free time? | 1 (none) to 4 (lots) | Positive | 1 to 4 | Positive | Feels there are more activities/places that would choose to or want to go to | Doesn't use barriers as that is a "select all that apply" question |
+| Talking about feelings | Repeated for adult at school, one of parents/carers, and another person your age...<br>Did you feel listened to when you spoke with...<br>Did you receive advice that you found helpful from...<br>How would you feel about speaking with... | 1 (not at all) to 4 (fully)<br>1 (not helpful) to 3 (very helpful)<br>1 (very uncomfortable) to 4 (very comfortable) | Positive | 3 to 12 | Positive | More positive about talking with others about feeling down | Doesn't include the branching question (if have ever talk with...) in the scoring |
+| Acceptance | Do you feel accepted as you are by... (adults at your school, your parents/carers, people in your local area, other people your age) | 1 (not at all) to 4 (fully) | Positive | 4 to 16 | Positive | Higher levels of acceptance by others | - |
+| School connection | I feel that I belong/belonged at my school | 1 (not at all) to 5 (a lot) | Positive | 1 to 5 | Positive | Higher levels of school connection | Matches GM |
+| Support from staff | At school there is an adult who...<br>... is interested in my schoolwork<br>... believes that I will be a success<br>... wants me to do my best<br>... listens to me when I have something to say | 1 (never) to 5 (always) | Positive | 4 to 20 | Positive | Higher levels of perceived support | Matches GM |
+| Support from parents/carers | At home there is an adult who...<br>... is interested in my schoolwork<br>... believes that I will be a success<br>... wants me to do my best<br>... listens to me when I have something to say | 1 (never) to 5 (always) | Positive | 4 to 20 | Positive | Higher levels of perceived support | Matches GM |
+| Home environment | How happy are you with the home that you live in? | 0 (very unhappy) to 10 (very happy) | Positive | 0 to 10 | Positive | Higher levels of happiness with the home environment | Matches GM |
+| Local environment | How safe do you feel when in your local area?<br>There are good places to spend your free time (e.g., leisure centres, parks, shops)<br>People around here support each other with their wellbeing<br>You can trust people around here<br>I could ask for help or a favour from neighbours | 1 (very safe) to 4 (very unsafe), and 5 (don't know)<br>1 (strongly agree) to 5 (strongly disagree) | Negative (exc. don't know) | 5 to 25 | Positive | More positive feelings about local environment | GM have question but don't score |
+| Discrimination | How often do people make you feel bad because of...<br>... your race, skin colour or where you were born?<br>... your gender?<br>... your sexual orientation?<br>... disability?<br>... your religion/faith? | 1 (often or always) to 5 (never) | Positive | 1 to 2 | Positive | Experiencing less discrimination | GM have question but don't score<br>Binary as felt identifying YP experiencing any form of discrimination is important, and discrimination for one reason is not necessarily lesser than for three reasons |
+| Local connection | I feel like I belong in my local area | 1 (strongly agree) to 4 (strongly disagree) | Negative | 1 to 4 | Positive | Feeling more like they belong | - |
+| Relative wealth | Compared to your friends, is your family richer, poorer or about the same? | 1 (richer), 2 (poorer), 3 (about the same), 4 (don't know) | Mixed (1+2 negative, 3 positive, 4 exc.) | 0 to 1 | Positive | Feeling similar levels of wealth compared with friends | - |
+| Future opportunities | How many options are available?<br>How do you feel about the options available?<br>Do you feel (or think you would feel) supported to explore options that interest you, even if no-one else around you has done them before? | 1 (not many) to 3 (a lot) and 4 (unsure)<br>1 (not interested) to 4 (very interested) and 5 (unsure)<br>1 (not at all) to 4 (fully) and 5 (unsure) | Positive (exc. unsure) | 3 to 12 | Positive | Feeling more positive about future opportunities | - |
+| Climate change | How often do you worry about the impact of climate change on your future? | 1 (often) to 4 (never) | Positive (if assuming that's a bad thing) | 1 to 4 | Positive | Less often worrying about climate change | - |
+| Support from friends | I get along with people around me<br>People like to spend time with me<br>I feel supported by my friends<br>My friends care about me when times are hard (for example if I am sick or have done something wrong) | 1 (not at all) to 5 (a lot) | Positive | 4 to 20 | Positive | Higher levels of perceived social support | Matches GM |
+| Bullying | How often do you get physically bullied at school? By this we mean getting hit, pushed around, threatened, or having belongings stolen.<br>How often do you get bullied in other ways at school? By this we mean insults, slurs, name calling, threats, getting left out or excluded by others, or having rumours spread about you on purpose.<br>How often do you get cyber-bullied? By this we mean someone sending mean text or online messages about you, creating a website making fun of you, posting pictures that make you look bad online, or sharing them with others. | 1 (not at all) to 4 (a few times a week) | Negative | 3 to 12 | Positive | Less bullying | Matches GM |
diff --git a/docs/streamlit_community_cloud.md b/docs/streamlit_community_cloud.md
new file mode 100644
index 0000000..04856df
--- /dev/null
+++ b/docs/streamlit_community_cloud.md
@@ -0,0 +1,16 @@
+# Streamlit Community Cloud
+
+Our dashboards are hosted using Streamlit Community Cloud.
+
+The app should be built based on the main branch in the GitHub repository. To push changes to the app from 'main':
+* Go to [https://share.streamlit.io/](https://share.streamlit.io/).
+* Log in using your GitHub account and then - assuming you have the appropriate access permissions - you will be able to see the applications hosted as part of the kailo-beewell organisation.
+* Go on the settings for a dashboard (three dots) and select 'Reboot' to recreate the app based on the latest version of your main branch.
+
+You'll need to make sure that the secrets stored for that app are up-to-date, matching the `secrets.toml` you have in your local directory.
+
+## Packages.txt
+
+The app is built using the `requirements.txt` (dependencies, as in the virtual environment) and `packages.txt` files provided in its GitHub repository.
+
+`Packages.txt` is used to manage any external dependencies. Streamlit Community Cloud runs on Linux, so these would be the same as the Linux dependencies that you would install with `apt-get` outside the Python environment. For this project, `libpangocairo-1.0-0` is included in `packages.txt`.
\ No newline at end of file
diff --git a/kailo_beewell_dashboard/__init__.py b/kailo_beewell_dashboard/__init__.py
index f2bd1fa..de5f356 100644
--- a/kailo_beewell_dashboard/__init__.py
+++ b/kailo_beewell_dashboard/__init__.py
@@ -4,5 +4,5 @@
Tools to support creation of #BeeWell survey dashboards for Kailo.
'''
-__version__ = '0.1.0'
+__version__ = '0.2.0'
__author__ = 'Amy Heather'
diff --git a/kailo_beewell_dashboard/authentication.py b/kailo_beewell_dashboard/authentication.py
index 8231684..6c4660d 100644
--- a/kailo_beewell_dashboard/authentication.py
+++ b/kailo_beewell_dashboard/authentication.py
@@ -51,13 +51,24 @@ def password_entered():
st.session_state['password_correct'] = False
-def login_screen():
+def login_screen(survey_type):
'''
Produces message that is displayed on the login screen.
+
+ Parameters
+ ----------
+ survey_type : string
+ Specifies whether this is for the 'standard' or 'symbol' survey.
'''
+ # Login screen title
st.title('The #BeeWell survey')
- st.markdown('''
-Please enter your school username and password to login to the dashboard.
+
+ # Compose message, with content depending on dashboard
+ message = '''
+Please enter your school username and password to login to the dashboard.'''
+
+ if survey_type == 'standard':
+ message += '''
For this synthetic dashboard, we have six schools - choose a username and
password from the following:
@@ -67,19 +78,35 @@ def login_screen():
* '**schoold**' and '**schooldpassword**'
* '**schoole**' and '**schoolepassword**'
* '**schoolf**' and '**schoolfpassword**' - no Year 10s
-''')
+'''
+ elif survey_type == 'symbol':
+ message += '''
+
+For this synthetic dashboard, we have two schools - choose a username and
+password from the following:
+* '**schoola**' and '**schoolapassword**'
+* '**schoolb**' and '**schoolbpassword**'
+'''
+
+ # Print message onto the dashboard
+ st.markdown(message)
-def check_password():
+def check_password(survey_type):
'''
Function that returns 'True' if the user has entered the correct password
Stores the user to the session state, and finds the school's full name,
and adds that to the session state as well.
+
+ Parameters
+ ----------
+ survey_type : string
+ Specifies whether this is for the 'standard' or 'symbol' survey.
'''
# If have not yet logged in...
if 'password_correct' not in st.session_state:
# Show inputs for username and password
- login_screen()
+ login_screen(survey_type)
st.text_input('Username', key='username')
st.text_input('Password', type='password', key='password')
st.write('')
@@ -87,7 +114,7 @@ def check_password():
return False
elif not st.session_state['password_correct']:
# Password not correct, show input boxes again and an error message
- login_screen()
+ login_screen(survey_type)
st.text_input('Username', key='username')
st.text_input('Password', type='password', key='password')
st.write('')
diff --git a/kailo_beewell_dashboard/bar_charts.py b/kailo_beewell_dashboard/bar_charts.py
index 1bab0bd..f45744b 100644
--- a/kailo_beewell_dashboard/bar_charts.py
+++ b/kailo_beewell_dashboard/bar_charts.py
@@ -7,11 +7,52 @@
import plotly.express as px
import streamlit as st
from .convert_image import convert_fig_to_html
+from .grammar import lower_first
-def survey_responses(dataset, font_size=16, output='streamlit', content=None):
+def create_group_list(drop, page='explore'):
'''
- Create bar charts for each of the quetsions in the provided dataframe.
+ Creates list of the pupil groups who have been excluded as n<10. This is
+    provided as a separate function, as we wanted to create the string
+    depending on three use cases - with the example of year group...:
+ (1) 'Year 7' pupils
+ (2) 'Year 7 and Year 9' pupils
+ (3) 'Year 7, Year 8 and Year 9' pupils (or longer)
+
+ Parameters
+ ----------
+ drop : series
+ Contains the names of the groups that had less than 10 responses
+ page : string
+ Specifies whether this is for the 'explore' page or 'demographic' page.
+ Default is the 'explore' page.
+ '''
+ # Convert to list (if not already)
+ drop = drop.to_list()
+
+ # Convert first letter to lower case (unless all are upper case)
+ drop = [lower_first(item) for item in drop]
+
+ # Generate string with appropriate grammar
+ if len(drop) == 1:
+ string = drop[0]
+ elif len(drop) == 2:
+ string = f'{drop[0]} and {drop[1]}'
+ elif len(drop) >= 3:
+ string = f'''{drop[0]}, {', '.join(drop[1:-1])} and {drop[-1]}'''
+
+    # Add pupils with appropriate grammar
+ if page == 'explore':
+ string = f'{string} pupils'
+ elif page == 'demographic':
+ string = f'pupils at {string}'
+ return string
+
+
+def survey_responses(dataset, font_size=16, output='streamlit', content=None,
+ page='explore'):
+ '''
+ Create bar charts for each of the questions in the provided dataframe.
The dataframe should contain questions which all have the same set
of possible responses.
@@ -26,6 +67,9 @@ def survey_responses(dataset, font_size=16, output='streamlit', content=None):
Must be either 'streamlit' or 'pdf, default is 'streamlit.
content : list
Optional input used when output=='pdf', contains HTML for report.
+ page : string
+ Specifies whether this is for the 'explore' page or 'demographic' page.
+ Default is the 'explore' page.
Returns
-------
@@ -47,7 +91,7 @@ def survey_responses(dataset, font_size=16, output='streamlit', content=None):
# Don't use in-built plotly title as that overlaps the legend if it
# spans over 2 lines
if output == 'streamlit':
- st.markdown(f'**{measure}**')
+ st.markdown(f'**{measure.lstrip()}**')
elif output == 'pdf':
temp_content.append(
f'''{measure}
''')
@@ -59,30 +103,33 @@ def survey_responses(dataset, font_size=16, output='streamlit', content=None):
# groups are, remove it from dataframe and print explanation
mask = df['cat_lab'] == 'Less than 10 responses'
under_10 = df[mask]
- if len(under_10.index) == 1:
- # Remove group from dataframe
- df = df[~mask]
+ # Remove group from dataframe for plotting
+ df = df[~mask]
+
+ # If there were some with n<10 but still some left to plot...
+ if len(under_10.index) > 0 and len(df.index) > 0:
# Create explanation
- dropped = np.unique(under_10['group'])[0]
- kept = np.unique(df['group'])[0]
+ dropped = create_group_list(under_10['group'], page)
+ kept = create_group_list(df['group'].drop_duplicates(), page)
explanation = f'''
-There were less than 10 responses from {dropped} pupils so results are just
-shown for {kept} pupils.'''
- elif len(under_10.index) == 2:
- unique_groups = np.unique(df['group'])
+There were less than 10 responses from {dropped} so results are just
+shown for {kept}.'''
+ # Else if all groups were removed and nothing left to plot...
+ elif len(df.index) == 0:
+ dropped = create_group_list(under_10['group'], page)
explanation = f'''
-There were less than 10 responses from {unique_groups[0]} pupils and from
-{unique_groups[1]} pupils, so no results can be shown.'''
+There were less than 10 responses from {dropped}, so no results can
+be shown.'''
# Print explanation on page for the removal of n<10 overall
- if len(under_10.index) > 0:
+ if (len(under_10.index) > 0) or (len(df.index) == 0):
if output == 'streamlit':
st.markdown(explanation)
elif output == 'pdf':
temp_content.append(f'{explanation}
')
# Create plot if there was at least one group without NaN
- if len(under_10.index) < 2:
+ if len(df.index) > 0:
# First, check for any individual categories censored due to
# n<10 (this is relevant to demographic page, the explore
@@ -90,7 +137,7 @@ def survey_responses(dataset, font_size=16, output='streamlit', content=None):
# If there are any rows with NaN...
null_mask = df['count'].isnull()
if sum(null_mask) > 0:
- # Filter to NaN rows amd get the categories as a string
+ # Filter to NaN rows and get the categories as a string
dropped = df.loc[null_mask, ['cat_lab', 'group']]
for school in dropped['group'].drop_duplicates():
dropped_string = ', '.join(dropped.loc[
@@ -182,16 +229,16 @@ def survey_responses(dataset, font_size=16, output='streamlit', content=None):
# PDF: Write image to a temporary PNG file, convert that
# to HTML, and add the image HTML to temp_content
elif output == 'pdf':
-
# Make and add HTML image tag to temp_content
temp_content.append(convert_fig_to_html(
fig=fig, alt_text=measure))
- # Insert temp_content into a div class and add to content
- content.append(f'''
-
- {''.join(temp_content)}
-
''')
+ # Insert temp_content into a div class and add to content
+ if output == 'pdf':
+ content.append(f'''
+
+{''.join(temp_content)}
+
''')
# At the end of the loop, if PDF report, return content
if output == 'pdf':
diff --git a/kailo_beewell_dashboard/create_and_aggregate_data.py b/kailo_beewell_dashboard/create_and_aggregate_data.py
index 7371e69..168d79f 100644
--- a/kailo_beewell_dashboard/create_and_aggregate_data.py
+++ b/kailo_beewell_dashboard/create_and_aggregate_data.py
@@ -273,7 +273,8 @@ def calculate_scores(data):
def results_by_school_and_group(
- data, agg_func, no_pupils, response_col=None, labels=None):
+ data, agg_func, no_pupils, response_col=None, labels=None,
+ survey_type='standard'):
'''
Aggregate results for all possible schools and groups (setting result to 0
or NaN if no pupils from a particular group are present).
@@ -296,6 +297,9 @@ def results_by_school_and_group(
a dictionary with all possible questions as keys, then values are
another dictionary where keys are all the possible numeric (or nan)
answers to the question, and values are relevant label for each answer.
+ survey_type : string
+ Either 'standard' or 'symbol' survey - default is standard - so that
+ appropriate demographic groupings are performed.
Returns
-------
@@ -309,16 +313,29 @@ def results_by_school_and_group(
# Define the groups that we want to aggregate by - when providing a filter,
# first value is the name of the category and the second is the variable
- groups = [
- 'All',
- ['Year 8', 'year_group_lab'],
- ['Year 10', 'year_group_lab'],
- ['Girl', 'gender_lab'],
- ['Boy', 'gender_lab'],
- ['FSM', 'fsm_lab'],
- ['Non-FSM', 'fsm_lab'],
- ['SEN', 'sen_lab'],
- ['Non-SEN', 'sen_lab']]
+ if survey_type == 'standard':
+ groups = [
+ 'All',
+ ['Year 8', 'year_group_lab'],
+ ['Year 10', 'year_group_lab'],
+ ['Girl', 'gender_lab'],
+ ['Boy', 'gender_lab'],
+ ['FSM', 'fsm_lab'],
+ ['Non-FSM', 'fsm_lab'],
+ ['SEN', 'sen_lab'],
+ ['Non-SEN', 'sen_lab']]
+ elif survey_type == 'symbol':
+ groups = [
+ 'All',
+ ['Year 7', 'year_group_lab'],
+ ['Year 8', 'year_group_lab'],
+ ['Year 9', 'year_group_lab'],
+ ['Year 10', 'year_group_lab'],
+ ['Year 11', 'year_group_lab'],
+ ['Girl', 'gender_lab'],
+ ['Boy', 'gender_lab'],
+ ['FSM', 'fsm_lab'],
+ ['Non-FSM', 'fsm_lab']]
# For each of the schools (which we know will all be present at least once
# as we base the school list on the dataset itself)
@@ -353,7 +370,8 @@ def results_by_school_and_group(
res['year_group_lab'] = 'All'
res['gender_lab'] = 'All'
res['fsm_lab'] = 'All'
- res['sen_lab'] = 'All'
+ if survey_type == 'standard':
+ res['sen_lab'] = 'All'
if group != 'All':
res[group[1]] = group[0]
@@ -511,3 +529,66 @@ def aggregate_proportions(data, response_col, labels, hide_low_response=False):
# Combine into a single dataframe and return
return pd.concat(rows)
+
+
+def aggregate_demographic(data, response_col, labels):
+ '''
+    Aggregates the demographic data by school and group (separate to
+    results_by_school_and_group() as we want to aggregate by school vs. all
+ others rather than for each school, and as we don't want to break down
+ results any further by any demographic characteristics)
+
+ Parameters
+ ----------
+ data : dataframe
+ Dataframe containing pupil-level demographic data
+ response_col : array
+ List of demographic columns to be aggregated
+ labels : dictionary
+ Dictionary with response options for each variable
+
+ Returns
+ -------
+ result : dataframe
+ Dataframe with % responses to demographic questions, for each school,
+ compared with all other schools
+ '''
+ # Initialise list to store results
+ result_list = list()
+
+ # For each of the schools (which we know will all be present at least once
+ # as we base the school list on the dataset itself)
+ schools = data['school_lab'].dropna().drop_duplicates().sort_values()
+ for school in schools:
+
+ # Add label identifying whether the school is the current one or not
+ data['school_group'] = np.where(data['school_lab'] == school, 1, 0)
+
+ # Loop through each of those groups (current school vs. other schools)
+ for group in [1, 0]:
+
+ # Filter to the group and then aggregate the data
+ to_agg = data[data['school_group'] == group]
+ res = aggregate_proportions(
+ data=to_agg, response_col=response_col, labels=labels,
+ hide_low_response=True)
+
+ # Label with the group
+ res['school_lab'] = school
+ res['school_group'] = group
+
+ # Append results to list
+ result_list.append(res)
+
+ # Combine all the results into a single dataframe
+ result = pd.concat(result_list)
+
+ # Hide results where n<10 overall (in addition to item-level already done)
+ result.loc[result['n_responses'] < 10,
+ ['count', 'percentage', 'n_responses']] = np.nan
+
+ # Add labels that can be used in figures
+ result['school_group_lab'] = np.where(
+ result['school_group'] == 1, 'Your school', 'Other schools')
+
+ return result
diff --git a/kailo_beewell_dashboard/explore_results.py b/kailo_beewell_dashboard/explore_results.py
index 1175c98..461d065 100644
--- a/kailo_beewell_dashboard/explore_results.py
+++ b/kailo_beewell_dashboard/explore_results.py
@@ -12,7 +12,7 @@
from .score_descriptions import score_descriptions
-def write_page_title(output='streamlit'):
+def write_page_title(output='streamlit', survey_type='standard'):
'''
Writes the title of this page/section (Explore Results), for the streamlit
page or for the PDF report.
@@ -21,6 +21,8 @@ def write_page_title(output='streamlit'):
----------
output : string
Specifies whether to write for 'streamlit' (default) or 'pdf'.
+ survey_type : string
+ Specifies whether this is for the 'standard' or 'symbol' survey.
Returns
-------
@@ -47,7 +49,10 @@ def write_page_title(output='streamlit'):
type2 = 'section'
line_break = '
'
descrip = f'''
-This {type1} allows you to explore the results of pupils at your school.
+This {type1} allows you to explore the results of pupils at your school.'''
+
+ if survey_type == 'standard':
+ descrip += f'''
{line_break} For each survey topic, you can see (a) a breakdown of how pupils
at your school responded to each question in that topic, and (b) a chart
building on results from the 'Summary' {type2} that allows you to understand
@@ -149,8 +154,8 @@ def write_topic_intro(chosen_variable, chosen_variable_lab, df,
# Create description string
topic_descrip = f'''
-These questions are about
-{description[f'{chosen_variable}_score'].lower()}
'''
+These questions are
+about {description[f'{chosen_variable}_score'].lower()}
'''
# Print that description string into streamlit page or PDF report HTML
if output == 'streamlit':
@@ -202,7 +207,8 @@ def write_response_section_intro(
return content
-def get_chosen_result(chosen_variable, chosen_group, df, school):
+def get_chosen_result(chosen_variable, chosen_group, df, school,
+ survey_type='standard'):
'''
Filters the dataframe with responses to each question, to just responses
for the chosen topic, school and group.
@@ -218,6 +224,8 @@ def get_chosen_result(chosen_variable, chosen_group, df, school):
Dataframe with responses to all the questions for all topics
school : string
Name of school to get results for
+ survey_type : string
+ Designates whether this filtering is for 'standard' or 'symbol' survey
Returns
----------
@@ -228,8 +236,9 @@ def get_chosen_result(chosen_variable, chosen_group, df, school):
'''
# Filter by the specified school and grouping
- chosen, group_lab = filter_by_group(df=df, chosen_group=chosen_group,
- output='explore', chosen_school=school)
+ chosen, group_lab = filter_by_group(
+ df=df, chosen_group=chosen_group, output='explore',
+ chosen_school=school, survey_type=survey_type)
# Filter by the chosen variable
chosen = chosen[chosen['group'] == chosen_variable]
@@ -350,6 +359,7 @@ def create_bar_charts(chosen_variable, chosen_result,
multiple_charts = define_multiple_charts()
# Import descriptions for the groups of stacked bar charts
+ # Note: still imported for the symbol dashboard, but not used there
response_descrip = create_response_description()
# Define which variables need to be reversed - intention was to be mostly
@@ -359,7 +369,7 @@ def create_bar_charts(chosen_variable, chosen_result,
reverse = ['esteem', 'negative', 'support', 'free_like', 'local_safe',
'local_other', 'belong_local', 'bully']
- # Create stacked bar chart with seperate chart groups if required
+ # Create bar chart with separate chart groups if required
if chosen_variable in multiple_charts:
# Counter as we don't want to break page before first description,
# but do for the later description
diff --git a/kailo_beewell_dashboard/grammar.py b/kailo_beewell_dashboard/grammar.py
new file mode 100644
index 0000000..c6b54ec
--- /dev/null
+++ b/kailo_beewell_dashboard/grammar.py
@@ -0,0 +1,26 @@
+'''
+Function to convert first letter of string to lower case, unless all other
+letters are upper case
+'''
+
+
+def lower_first(string):
+ '''
+ Converts first letter of string to lower case, unless all other letters
+ in the string are uppercase.
+
+ Parameters
+ ----------
+ string : string
+ The string to be modified
+
+ Returns
+ -------
+ new_string : string
+ The modified string
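+
+ Examples
+ --------
+ >>> lower_first('By year group')
+ 'by year group'
+ >>> lower_first('FSM')
+ 'FSM'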
+ '''
+ if string.isupper():
+ new_string = string
+ else:
+ new_string = string[0].lower() + string[1:]
+ return new_string
diff --git a/kailo_beewell_dashboard/import_data.py b/kailo_beewell_dashboard/import_data.py
index 72fcd89..113dc99 100644
--- a/kailo_beewell_dashboard/import_data.py
+++ b/kailo_beewell_dashboard/import_data.py
@@ -4,7 +4,6 @@
import numpy as np
import pandas as pd
import streamlit as st
-from pandas.testing import assert_frame_equal
from tempfile import NamedTemporaryFile
import pymysql
@@ -32,22 +31,31 @@ def get_df(query, conn):
return df
-def import_tidb_data(tests=False):
+def import_tidb_data(survey_type):
'''
Imports all the datasets from TiDB Cloud, fixes any data type issues, and
saves the datasets to the session state.
Parameters
----------
- tests : Boolean
- Whether to run tests to check the data imported from TiDB cloud matches
- the CSV files in the GitHub repository
+ survey_type : string
+ Designates whether to import for 'standard' or 'symbol' survey
'''
+ # Define the session state variables (keys) and TiDB datasets (values)
+ if survey_type == 'standard':
+ items = {'scores': 'standard_school_aggregate_scores',
+ 'scores_rag': 'standard_school_aggregate_scores_rag',
+ 'responses': 'standard_school_aggregate_responses',
+ 'counts': 'standard_school_overall_counts',
+ 'demographic': 'standard_school_aggregate_demographic'}
+ elif survey_type == 'symbol':
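+ # The symbol survey has no aggregate score datasets - only responses,
+ # counts and demographic data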
+ items = {'responses': 'symbol_school_aggregate_responses',
+ 'counts': 'symbol_school_overall_counts',
+ 'demographic': 'symbol_school_aggregate_demographic'}
+
# First, check if everything is in the session state - if so, don't need to
# connect, but if missing stuff, will want to connect
- items = ['scores', 'scores_rag', 'responses', 'counts', 'demographic']
-
- if not all([x in st.session_state for x in items]):
+ if not all([x in st.session_state for x in items.keys()]):
# Create temporary PEM file for setting up the connection
with NamedTemporaryFile(suffix='.pem') as temp:
@@ -75,57 +83,33 @@ def import_tidb_data(tests=False):
ssl_ca=temp.name
)
- # Scores
- if 'scores' not in st.session_state:
- scores = get_df('SELECT * FROM aggregate_scores;', conn)
- st.session_state['scores'] = scores
-
- # Scores RAG
- if 'scores_rag' not in st.session_state:
- scores_rag = get_df(
- 'SELECT * FROM aggregate_scores_rag;', conn)
- # Convert columns to numeric
- to_fix = ['mean', 'count', 'total_pupils', 'group_n',
- 'group_wt_mean', 'group_wt_std', 'lower', 'upper']
- for col in to_fix:
- scores_rag[col] = pd.to_numeric(scores_rag[col],
+ # Loop through each of the items
+ for key, value in items.items():
+
+ # Check if it is in the session state, and if not...
+ if key not in st.session_state:
+
+ # Import data from TiDB Cloud
+ df = get_df(f'SELECT * FROM {value}', conn)
+
+ # If dataset is scores with RAG ratings, convert
+ # columns to numeric, and string 'nan' to actual np.nan
+ if key == 'scores_rag':
+ to_fix = ['mean', 'count', 'total_pupils',
+ 'group_n', 'group_wt_mean', 'group_wt_std',
+ 'lower', 'upper']
+ for col in to_fix:
+ df[col] = pd.to_numeric(df[col], errors='ignore')
+ df['rag'] = df['rag'].replace('nan', np.nan)
+
+ # If dataset is demographic, convert n_responses to numeric
+ if key == 'demographic':
+ df['n_responses'] = pd.to_numeric(df['n_responses'],
+ errors='ignore')
+
+ # If dataset is counts, convert counts to numeric
+ if key == 'counts':
+ df['count'] = pd.to_numeric(df['count'],
errors='ignore')
- # Convert string 'nan' to actual np.nan
- scores_rag['rag'] = scores_rag['rag'].replace('nan', np.nan)
- st.session_state['scores_rag'] = scores_rag
-
- # Responses
- if 'responses' not in st.session_state:
- responses = get_df('SELECT * FROM aggregate_responses;', conn)
- st.session_state['responses'] = responses
-
- # Overall counts
- if 'counts' not in st.session_state:
- counts = get_df('SELECT * FROM overall_counts;', conn)
- counts['count'] = pd.to_numeric(counts['count'],
- errors='ignore')
- st.session_state['counts'] = counts
-
- # Demographic
- if 'demographic' not in st.session_state:
- demographic = get_df(
- 'SElECT * FROM aggregate_demographic;', conn)
- st.session_state['demographic'] = demographic
-
- # Run tests to check whether these match the csv files
- if tests:
- assert_frame_equal(
- st.session_state.scores,
- pd.read_csv('data/survey_data/aggregate_scores.csv'))
- assert_frame_equal(
- st.session_state.scores_rag,
- pd.read_csv('data/survey_data/aggregate_scores_rag.csv'))
- assert_frame_equal(
- st.session_state.responses,
- pd.read_csv('data/survey_data/aggregate_responses.csv'))
- assert_frame_equal(
- st.session_state.counts,
- pd.read_csv('data/survey_data/overall_counts.csv'))
- assert_frame_equal(
- st.session_state.demographic,
- pd.read_csv('data/survey_data/aggregate_demographic.csv'))
+ # Save into the session state
+ st.session_state[key] = df
diff --git a/kailo_beewell_dashboard/page_setup.py b/kailo_beewell_dashboard/page_setup.py
index df5f0cf..9989117 100644
--- a/kailo_beewell_dashboard/page_setup.py
+++ b/kailo_beewell_dashboard/page_setup.py
@@ -30,9 +30,14 @@ def page_logo():
''', unsafe_allow_html=True)
-def page_setup():
+def page_setup(type):
'''
Set up page to standard conditions, with layout as specified
+
+ Parameters
+ ----------
+ type : string
+ Survey type - 'standard' or 'symbol'
'''
# Set up streamlit page parameters
st.set_page_config(
@@ -40,8 +45,8 @@ def page_setup():
page_icon='🐝',
initial_sidebar_state='expanded',
layout='centered',
- menu_items={'About': '''
-Dashboard for schools completing the standard version of the #BeeWell survey in
+ menu_items={'About': f'''
+Dashboard for schools completing the {type} version of the #BeeWell survey in
North Devon and Torridge in 2023/24.'''})
# Import CSS style
diff --git a/kailo_beewell_dashboard/reshape_data.py b/kailo_beewell_dashboard/reshape_data.py
index dec16f6..d57ab26 100644
--- a/kailo_beewell_dashboard/reshape_data.py
+++ b/kailo_beewell_dashboard/reshape_data.py
@@ -7,8 +7,8 @@
import numpy as np
-def filter_by_group(df, chosen_group, output,
- chosen_school=None, chosen_variable=None):
+def filter_by_group(df, chosen_group, output, chosen_school=None,
+ chosen_variable=None, survey_type='standard'):
'''
Filter dataframe so just contains rows relevant for chosen group (either
results from all pupils, or from the two chosen groups) and school
@@ -26,6 +26,8 @@ def filter_by_group(df, chosen_group, output,
Optional input, name of a school to filter to as well
chosen_variable : string
Optional input, name of a variable to filter to as well
+ survey_type : string
+ Designates whether this filtering is for 'standard' or 'symbol' survey
Returns
-------
@@ -52,8 +54,12 @@ def filter_by_group(df, chosen_group, output,
# If the chosen group was All, then no changes are made, as this is default
if chosen_group == 'By year group':
group_lab = 'year_group_lab'
- year_group = ['Year 8', 'Year 10']
- order = ['Year 8', 'Year 10']
+ if survey_type == 'standard':
+ year_group = ['Year 8', 'Year 10']
+ order = ['Year 8', 'Year 10']
+ elif survey_type == 'symbol':
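+ # Symbol survey includes all year groups from Year 7 to Year 11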
+ year_group = ['Year 7', 'Year 8', 'Year 9', 'Year 10', 'Year 11']
+ order = ['Year 7', 'Year 8', 'Year 9', 'Year 10', 'Year 11']
elif chosen_group == 'By gender':
group_lab = 'gender_lab'
gender = ['Girl', 'Boy']
@@ -67,12 +73,13 @@ def filter_by_group(df, chosen_group, output,
sen = ['SEN', 'Non-SEN']
order = ['SEN', 'Non-SEN']
- # Filter to chosen group
+ # Filter to chosen group (exc. SEN filter for symbol survey)
chosen = df[
(df['year_group_lab'].isin(year_group)) &
(df['gender_lab'].isin(gender)) &
- (df['fsm_lab'].isin(fsm)) &
- (df['sen_lab'].isin(sen))]
+ (df['fsm_lab'].isin(fsm))]
+ if survey_type == 'standard':
+ chosen = chosen[chosen['sen_lab'].isin(sen)]
# Filter to chosen school, if relevant
if chosen_school is not None:
@@ -148,7 +155,7 @@ def extract_nested_results(chosen, group_lab, plot_group=False):
return chosen_result
-def get_school_size(counts, school):
+def get_school_size(counts, school, survey_type='standard'):
'''
Get the total pupil number for a given school
@@ -158,6 +165,8 @@ def get_school_size(counts, school):
Dataframe containing the count of pupils at each school
school : string
Name of the school
+ survey_type : string
+ Designates whether this filtering is for 'standard' or 'symbol' survey
Returns
-------
@@ -168,10 +177,11 @@ def get_school_size(counts, school):
school_counts = counts.loc[counts['school_lab'] == school]
# Find total school size
- school_size = school_counts.loc[
- (school_counts['year_group_lab'] == 'All') &
- (school_counts['gender_lab'] == 'All') &
- (school_counts['fsm_lab'] == 'All') &
- (school_counts['sen_lab'] == 'All'), 'count'].values[0].astype(int)
+ df = school_counts[(school_counts['year_group_lab'] == 'All') &
+ (school_counts['gender_lab'] == 'All') &
+ (school_counts['fsm_lab'] == 'All')]
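+ # Only the standard survey counts are broken down by SEN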
+ if survey_type == 'standard':
+ df = df[df['sen_lab'] == 'All']
+ school_size = df['count'].values[0].astype(int)
return school_size
diff --git a/kailo_beewell_dashboard/response_labels.py b/kailo_beewell_dashboard/response_labels.py
index 156fadd..3a62810 100644
--- a/kailo_beewell_dashboard/response_labels.py
+++ b/kailo_beewell_dashboard/response_labels.py
@@ -4,9 +4,32 @@
'''
+def add_keys(keys, value, dictionary):
+ '''
+ Add multiple keys with the same value to the dictionary
+
+ Parameters
+ -----------
+ keys: array
+ Array with strings that are keys for the dictionary
+ value: string
+ Existing key in the dictionary whose value will be assigned to each
+ new key
+ dictionary: dictionary
+ Dictionary to add the keys and values to, and to source the value from
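+
+ Examples
+ --------
+ >>> labels = {'birth': {1: 'Yes', 2: 'No'}}  # illustrative values
+ >>> add_keys(['birth_you', 'birth_parent1'], 'birth', labels)
+ >>> labels['birth_you']
+ {1: 'Yes', 2: 'No'}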
+ '''
+ dictionary.update(dict.fromkeys(keys, dictionary[value]))
+
+
def create_response_label_dict():
'''
- Creates dictionary with labels for each response in each question
+ Creates dictionary with labels for each response in each question in the
+ standard #BeeWell survey and council data.
+
+ Returns
+ -------
+ labels : dictionary
+ Dictionary where each key is a topic name or group, and each value is
+ another dictionary where keys are the possible numeric answers and
+ values are the corresponding labels.
'''
# Define the labels to use for different columns
labels = {
@@ -371,58 +394,110 @@ def create_response_label_dict():
}
}
- def add_keys(keys, value, dictionary=labels):
- '''
- Add multiple keys with the same value to the dictionary
- Inputs:
- keys: Array with the keys
- value: String which is the value for all the keys
- dictionary: Dictionary to add the keys and values to, default is labels
- '''
- dictionary.update(dict.fromkeys(keys, labels[value]))
-
# Add values for the keys below, so each key has the same set of values
# (Rather than repeatedly defining them all above)
- add_keys(['birth_parent1', 'birth_parent2', 'birth_you'], 'birth')
+ add_keys(['birth_parent1', 'birth_parent2', 'birth_you'], 'birth', labels)
add_keys(['autonomy_pressure', 'autonomy_express', 'autonomy_decide',
'autonomy_told', 'autonomy_myself', 'autonomy_choice'],
- 'autonomy')
+ 'autonomy', labels)
add_keys(['optimism_best', 'optimism_good', 'optimism_work'],
- 'optimism_other')
+ 'optimism_other', labels)
add_keys(['wellbeing_optimistic', 'wellbeing_useful', 'wellbeing_relaxed',
'wellbeing_problems', 'wellbeing_thinking', 'wellbeing_close',
- 'wellbeing_mind'], 'wellbeing')
+ 'wellbeing_mind'], 'wellbeing', labels)
add_keys(['esteem_satisfied', 'esteem_qualities', 'esteem_well',
- 'esteem_value', 'esteem_good'], 'esteem')
+ 'esteem_value', 'esteem_good'], 'esteem', labels)
add_keys(['stress_control', 'stress_overcome', 'stress_confident',
- 'stress_way'], 'stress')
+ 'stress_way'], 'stress', labels)
add_keys(['negative_lonely', 'negative_unhappy', 'negative_like',
'negative_cry', 'negative_school', 'negative_worry',
'negative_sleep', 'negative_wake', 'negative_shy',
- 'negative_scared'], 'negative')
- add_keys(['support_ways', 'support_look'], 'support')
+ 'negative_scared'], 'negative', labels)
+ add_keys(['support_ways', 'support_look'], 'support', labels)
add_keys(['places_barriers___1', 'places_barriers___2',
'places_barriers___3', 'places_barriers___4',
'places_barriers___5', 'places_barriers___6',
'places_barriers___7', 'places_barriers___8',
- 'places_barriers___9'], 'places_barriers')
+ 'places_barriers___9'], 'places_barriers', labels)
add_keys(['staff_interest', 'staff_believe', 'staff_best', 'staff_listen',
'home_interest', 'home_believe', 'home_best', 'home_listen'],
- 'relationships')
- add_keys(['staff_talk', 'home_talk', 'peer_talk'], 'talk')
+ 'relationships', labels)
+ add_keys(['staff_talk', 'home_talk', 'peer_talk'], 'talk', labels)
add_keys(['staff_talk_listen', 'home_talk_listen', 'peer_talk_listen'],
- 'talk_listen')
+ 'talk_listen', labels)
add_keys(['staff_talk_helpful', 'home_talk_helpful', 'peer_talk_helpful'],
- 'talk_helpful')
- add_keys(['staff_talk_if', 'home_talk_if', 'peer_talk_if'], 'talk_if')
+ 'talk_helpful', labels)
+ add_keys(['staff_talk_if', 'home_talk_if', 'peer_talk_if'],
+ 'talk_if', labels)
add_keys(['accept_staff', 'accept_home', 'accept_local',
- 'accept_peer'], 'accept')
+ 'accept_peer'], 'accept', labels)
add_keys(['local_support', 'local_trust', 'local_neighbours',
- 'local_places'], 'local_other')
+ 'local_places'], 'local_other', labels)
add_keys(['discrim_race', 'discrim_gender', 'discrim_orientation',
- 'discrim_disability', 'discrim_faith'], 'discrim')
+ 'discrim_disability', 'discrim_faith'], 'discrim', labels)
add_keys(['social_along', 'social_time', 'social_support', 'social_hard'],
- 'social')
- add_keys(['bully_physical', 'bully_other', 'bully_cyber'], 'bully')
+ 'social', labels)
+ add_keys(['bully_physical', 'bully_other', 'bully_cyber'], 'bully', labels)
+
+ return labels
+
+
+def create_symbol_response_label_dict():
+ '''
+ Creates dictionary with labels for each response in each question in the
+ symbol #BeeWell survey and council data.
+
+ Returns
+ -------
+ labels : dictionary
+ Dictionary where each key is a topic name or group, and each value is
+ another dictionary where keys are the possible numeric answers and
+ values are the corresponding labels.
+ '''
+ # Define the labels to use for different columns
+ labels = {
+ 'symbol': {
+ 1: 'Happy',
+ 2: 'Ok',
+ 3: 'Sad'
+ },
+ 'gender': {
+ 0: 'Male',
+ 1: 'Female'
+ },
+ 'year_group': {
+ 7: 'Year 7',
+ 8: 'Year 8',
+ 9: 'Year 9',
+ 10: 'Year 10',
+ 11: 'Year 11'
+ },
+ 'fsm': {
+ 0: 'Non-FSM',
+ 1: 'FSM'
+ },
+ 'sen': {
+ 0: 'Non-SEN',
+ 1: 'SEN'
+ },
+ 'ethnicity': {
+ 1: 'Ethnic minority',
+ 2: 'White British'
+ },
+ 'english_additional': {
+ 0: 'No',
+ 1: 'Yes'
+ },
+ 'school': {
+ 1: 'School A',
+ 2: 'School B'
+ }
+ }
+
+ # Add values for the keys below, so each key has the same set of values
+ # (Rather than repeatedly defining them all above)
+ add_keys(['symbol_family', 'symbol_home', 'symbol_friends',
+ 'symbol_choice', 'symbol_things', 'symbol_health',
+ 'symbol_future', 'symbol_school', 'symbol_free',
+ 'symbol_life'], 'symbol', labels)
return labels
diff --git a/kailo_beewell_dashboard/reuse_text.py b/kailo_beewell_dashboard/reuse_text.py
index 7565d9e..fafd160 100644
--- a/kailo_beewell_dashboard/reuse_text.py
+++ b/kailo_beewell_dashboard/reuse_text.py
@@ -1,20 +1,45 @@
'''
-Large sections of text that are re-used between the dashboard and PDF report,
-but don't fit into other .py files (for example, if used on different pages -
-so on About page for dashboard, and then in Introduction for PDF report)
+Dictionary with large sections of descriptive text - mainly text from the
+About page (some of which also appears in the PDF report introduction) - but
+it also includes the caveat about comparisons (used in the PDF report
+introduction and on various dashboard pages like summary and explore results).
+This text is also reused between different versions of the dashboard (e.g.
+standard and symbol).
'''
+reuse_text = {
-def text_how_use():
- '''
- Generate a few paragraphs on how to use this report
+ # Introduction for the About page
+ 'about_intro': '''
+This page has lots of helpful information about the research projects (Kailo
+and #BeeWell), as well as advice on using and accessing this dashboard, and
+some background information around young people's wellbeing.''',
- Returns
- -------
- text : string
- Markdown-formatted string
- '''
- text = '''
+ # Description of the Kailo project
+ 'kailo': '''
+Our aim is to help local communities, young people and public service
+partnerships better understand and address the root causes (and wider
+determinants) of young people's mental health.
+
+We're made up of leading academics, designers and practitioners, dedicated to
+working alongside communities in specific localities. Together, we will test
+and co-design evidence-based responses (in a 'framework') to these root causes,
+over the next four years.
+
+Our model is formed of three key stages:
+* **Early Discovery** - here, we build strong and trusted relationships with
+local partners, with an aim to understand what matters locally, thus forming
+communities around youth- and community-centred priorities
+* **Deeper Discovery and Codesign** - this stage, which we're in currently,
+sees us codesign systemic responses to social determinants.
+* **Prototyping, Implementation and Testing** - this is where the learning is
+applied, integrating the codesigned responses with the local system,
+prototyping them and making iterative refinements along the way.
+
+To find out more about Kailo, check out our site: https://kailo.community/''',
+
+ # How to use the results from this report
+ 'how_to_use_results': '''
These data can provide a useful starting point for discussions about the needs
of your school population and priority areas for development and improvement.
It can also be useful in considering areas of strengths and/or helping pupils
@@ -32,20 +57,70 @@ def text_how_use():
strengths or difficulties the #BeeWell survey may highlight. They suggested
involving a range of students (not just those involved in school councils) in
planning how to raise awareness about wellbeing and to support the needs of
-young people.'''
- return text
+young people.''',
+
+ # Viewing dashboard on different devices
+ 'view_devices': '''
+Yes - this dashboard will resize so you should be able to access it on a range
+of devices (computer/laptop, tablet, phone, etc).''',
+
+ # Will there be support for interpreting and actioning on results?
+ 'dashboard_support': '''
+Yes - Child Outcomes Research Consortium (CORC) has been funded to provide
+seminars and 1:1 support to schools in Northern Devon, to help you understand
+how to navigate the dashboard, interpret results, and suggest feedback in the
+development of action plans. You should receive information about this via
+email, but if you have not or have any questions, please contact us at
+kailobeewell@dartington.org.uk.''',
+ # Context of what we already know about young people's wellbeing
+ 'wellbeing_context': '''
+* The peak age of onset of mental health difficulties is 14.5 years.[1]
+
+* Mental health and wellbeing in adolescence predicts adult health, labour
+market and other important outcomes.[2]
+* The wellbeing of adolescents has decreased in the last two decades, while the
+prevalence of mental health difficulties among them has increased.[3,4]
+
+* A recent international study ranked the UK's young people fourth from bottom
+across nearly 80 countries in terms of life satisfaction.[5,6]
+* Young people's mental health and wellbeing can be influenced by multiple
+drivers, including their health and routines, hobbies and entertainment,
+relationships, school, environment and society, and how they feel about their
+future.[7]
-def text_caution_comparing():
- '''
- Generate a few paragraphs to caution around comparisons
+
+References:
+[1] Solmi, M. et al (2021). Age at onset of mental disorders worldwide:
+large-scale meta-analysis of 192 epidemiological studies. Molecular Psychiatry,
+Online First. Available at: https://www.nature.com/articles/s41380-021-01161-7
+
+[2] Goodman A, Joshi H, Nasim B, Tyler C (2015). Social and emotional skills in
+childhood and their long-term effects on adult life. London: EIF. Available at:
+https://www.eif.org.uk/report/social-and-emotional-skills-in-childhood-and-thei
+r-long-term-effects-on-adult-life.
+[3] Children's Society (2021). The Good Childhood Report 2021. London:
+Children's Society. Available at: https://www.childrenssociety.org.uk/informati
+on/professionals/resources/good-childhood-report-2021
+[4] NHS Digital (2021). Mental health of children and young people in England,
+2021 - wave 2 follow up to the 2017 survey. London: NHS Digital. Available at:
+https://digital.nhs.uk/data-and-information/publications/statistical/mental-hea
+lth-of-children-and-young-people-in-england/2021-follow-up-to-the-2017-survey
+
+[5] Office for Economic Cooperation and Development (2019). Programme for
+International Student Assessment (PISA) results. Paris: OECD. Available at:
+https://www.oecd.org/publications/pisa-2018-results-volume-iii-acd78851-en.htm
+
+[6] Marquez J, Long, E (2021). A Global Decline in Adolescents' Subjective
+Well-Being: a Comparative Study Exploring Patterns of Change in the Life
+Satisfaction of 15-Year-Old Students in 46 Countries. Child Ind Res 14,
+1251-1292 (2021). Available at: https://doi.org/10.1007/s12187-020-09788-8
+[7] #BeeWell Programme Team (2021). #BeeWell survey. Manchester: University of
+Manchester. Available at: https://gmbeewell.org/wp-content/uploads/2021/09/BeeW
+ell-Questionnaires-Booklet.pdf
''',
- Returns
- -------
- text : string
- Markdown-formatted string
- '''
- text = '''
+ # Caution around making comparisons between schools
+ 'caution_comparing': '''
Always be mindful when making comparisons between different schools. There are
a number of factors that could explain differences in scores (whether you are
above average, average, or below average). These include:
@@ -64,12 +139,4 @@ def text_caution_comparing():
include any reflection of results from pupils who did not complete some or all
of the questions for that topic.
'''
- return text
-
-# Draft phrasing for benchmarking (not currently included in dashboards):
-# When comparing to the Greater Manchester data, be aware that (i) there
-# are likely to be greater differences in population characteristics
-# between Northern Devon and Greater Manchester than between different
-# areas in Northern Devon, and (ii) the Greater Manchester data were
-# collected in Autumn Term 2021 while the Havering data was collected in
-# Summer Term 2023.
+}
diff --git a/kailo_beewell_dashboard/static_report.py b/kailo_beewell_dashboard/static_report.py
index 8c526a2..65855f8 100644
--- a/kailo_beewell_dashboard/static_report.py
+++ b/kailo_beewell_dashboard/static_report.py
@@ -10,12 +10,100 @@
from .explore_results import (
write_page_title,
create_topic_dict,
- create_explore_topic_page)
+ create_explore_topic_page,
+ get_chosen_result,
+ create_bar_charts)
from .who_took_part import (
create_demographic_page_intro,
demographic_headers,
demographic_plots)
-from .reuse_text import text_how_use, text_caution_comparing
+from .reuse_text import reuse_text
+
+
+def logo_html():
+ '''
+ Generates HTML string to create logo as displayed on cover page of reports.
+
+ Returns
+ -------
+ img_tag : string
+ HTML to generate the logo
+ '''
+ # Encode image
+ data_uri = base64.b64encode(open('images/kailo_beewell_logo_padded.png',
+ 'rb').read()).decode('utf-8')
+ # Insert into HTML image tag
+ img_tag = f'''
+'''
+ return img_tag
+
+
+def illustration_html():
+ '''
+ Generates DIV element containing illustration as displayed on cover page of
+ reports.
+
+ Returns
+ -------
+ illustration : string
+ HTML to generate div containing the illustration
+ '''
+ # Encode image
+ data_uri = base64.b64encode(open('images/home_image_3_transparent.png',
+ 'rb').read()).decode('utf-8')
+ # Insert into HTML image tag
+ img_tag = f'''
+'''
+ # Insert into div
+ illustration = f'''
+
+ {img_tag}
+
'''
+ return illustration
+
+
+def structure_report(pdf_title, content):
+ '''
+ Inserts the provided HTML content into the structure of the report -
+ setting the PDF title, reading the CSS stylesheet, and inserting the
+ content of the report
+
+ Parameters
+ ----------
+ pdf_title : string
+ Title for the pdf file
+ content : string
+ HTML content of the report
+
+ Returns
+ -------
+ html_content : string
+ HTML to produce the styled report
+ '''
+ # Remove the final temporary image file
+ if os.path.exists('report/temp_image.png'):
+ os.remove('report/temp_image.png')
+
+ # Import the CSS stylesheet
+ with open('css/static_report_style.css') as css:
+ css_style = css.read()
+
+ html_content = f'''
+
+
+
+ {pdf_title}
+
+
+
+ {''.join(content)}
+
+
+'''
+ return html_content
def create_static_report(chosen_school, chosen_group, df_scores, df_prop,
@@ -62,13 +150,8 @@ def create_static_report(chosen_school, chosen_group, df_scores, df_prop,
# Title page #
##############
- # Logo - convert to HTML, then add to the content for the report
- data_uri = base64.b64encode(open('images/kailo_beewell_logo_padded.png',
- 'rb').read()).decode('utf-8')
- img_tag = f'''
-'''
- content.append(img_tag)
+ # Add logo
+ content.append(logo_html())
# Get group name with only first character modified to lower case
group_lower_first = chosen_group[0].lower() + chosen_group[1:]
@@ -88,21 +171,11 @@ def create_static_report(chosen_school, chosen_group, df_scores, df_prop,
(d) by year group.
This report contains the results {group_lower_first} for
{chosen_school}.
-
-'''
+'''
content.append(title_page)
- # Illustration - convert to HTML, then add to the content for the report
- data_uri = base64.b64encode(open('images/home_image_3_transparent.png',
- 'rb').read()).decode('utf-8')
- img_tag = f'''
-'''
- illustration = f'''
-
- {img_tag}
-
'''
- content.append(illustration)
+ # Add illustration
+ content.append(illustration_html())
################
# Introduction #
@@ -114,11 +187,11 @@ def create_static_report(chosen_school, chosen_group, df_scores, df_prop,
# Using the report (duplicate text with About.py)
content.append('How to use this report
')
- content.append(markdown(text_how_use()))
+ content.append(markdown(reuse_text['how_to_use_results']))
# Comparison warning (duplicate text with Explore results.py)
content.append('Comparing between schools
')
- content.append(markdown(text_caution_comparing()))
+ content.append(markdown(reuse_text['caution_comparing']))
#####################
# Table of contents #
@@ -174,7 +247,7 @@ def create_static_report(chosen_school, chosen_group, df_scores, df_prop,
# Explore results section #
###########################
- # Craete cover page with title and introduction
+ # Create cover page with title and introduction
content.append(write_page_title(output='pdf'))
# Create pages for all of the topics
@@ -200,27 +273,150 @@ def create_static_report(chosen_school, chosen_group, df_scores, df_prop,
# Create HTML report #
######################
- # Remove the final temporary image file
- if os.path.exists('report/temp_image.png'):
- os.remove('report/temp_image.png')
+ html_content = structure_report(pdf_title, content)
- # Import the CSS stylesheet
- with open('css/static_report_style.css') as css:
- css_style = css.read()
+ return html_content
- html_content = f'''
-
-
-
- {pdf_title}
-
-
-
- {''.join(content)}
-
-
-'''
+
+def create_static_symbol_report(
+ chosen_school, df_prop, counts, dem_prop, pdf_title):
+ '''
+ Generate a static symbol survey PDF report for the chosen school, with
+ all the key information and figures from the dashboard
+
+ Parameters
+ ----------
+ chosen_school : string
+ Name of the chosen school
+ df_prop : dataframe
+ Dataframe with proportion of each response to each survey question
+ counts : dataframe
+ Dataframe with the counts of pupils at each school
+ dem_prop : dataframe
+ Dataframe with proportion of each response to the demographic questions
+ pdf_title : string
+ Title for the PDF file
+ '''
+ ##########
+ # Set-up #
+ ##########
+ # Create empty list to fill with HTML content for PDF report
+ content = []
+
+ # Create dictionary with groups and labels to use in table of contents
+ survey_groups = {'all': 'For all pupils',
+ 'year': 'By year group',
+ 'gender': 'By gender',
+ 'fsm': 'By FSM'}
+
+ # Get school size
+ school_size = get_school_size(counts, chosen_school, 'symbol')
+
+ ##############
+ # Title page #
+ ##############
+
+ # Add logo
+ content.append(logo_html())
+
+ # Title and introduction
+ title_page = f'''
+
+
The #BeeWell Survey
+
Thank you for taking
+ part in the #BeeWell survey delivered by Kailo.
+
The results from pupils at your school can be explored using the
+ interactive dashboard at
+ https://synthetic-beewell-kailo-standard-school-dashboard.streamlit.app/.
+ This report has been downloaded from that dashboard.
+ This report contains the results for {chosen_school}.
+
'''
+ content.append(title_page)
+
+ # Add illustration
+ content.append(illustration_html())
+
+ ################
+ # Introduction #
+ ################
+
+ # Heading
+ content.append('''
+ Introduction
''')
+
+ # Using the report (duplicate text with About.py)
+ content.append('How to use this report
')
+ content.append(markdown(reuse_text['caution_comparing']))
+
+ #####################
+ # Table of contents #
+ #####################
+
+ # Get all of the explore results pages as lines for the table of contents
+ explore_results_pages = []
+ for key, value in survey_groups.items():
+ line = f'''{value}'''
+ explore_results_pages.append(line)
+
+ # Get the demographic headers as lines for the table of contents
+
+ content.append(f'''
+
+
Table of Contents
+
+ - Explore results - Explore how
+ your pupils responded to each survey question
+
{''.join(explore_results_pages)}
+
+
+ - Who took part - See the
+ characteristics of the pupils who took part in the survey
+
+
+
+''')
+
+ ###########################
+ # Explore results section #
+ ###########################
+
+ # Create cover page with title and introduction
+ content.append(write_page_title(output='pdf', survey_type='symbol'))
+
+ # Create pages with plots for each measure
+ chosen_variable = 'symbol'
+ df_prop['group'] = chosen_variable
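+ # With 'group' set to 'symbol' for every row, get_chosen_result() will
+ # keep all of the symbol survey questions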
+ for key, value in survey_groups.items():
+ # Add title for that group
+ content.append(f'''
+Explore
+results {value[0].lower() + value[1:]}
''')
+ # Get results for that school and group
+ chosen_result = get_chosen_result(
+ chosen_variable, chosen_group=value, df=df_prop,
+ school=chosen_school, survey_type='symbol')
+ # Add bar charts to the HTML
+ content = create_bar_charts(
+ chosen_variable, chosen_result, output='pdf', content=content)
+
+ #########################
+ # Who took part section #
+ #########################
+
+ # Create cover page with title and introduction
+ content.append(create_demographic_page_intro(school_size, 'pdf'))
+
+ # Create pages with plots for each measure
+ dem_prop['plot_group'] = dem_prop['measure']
+ content = demographic_plots(
+ dem_prop=dem_prop, chosen_school=chosen_school,
+ chosen_group='For your school', output='pdf', content=content,
+ survey_type='symbol')
+
+ ######################
+ # Create HTML report #
+ ######################
+
+ html_content = structure_report(pdf_title, content)
return html_content
diff --git a/kailo_beewell_dashboard/who_took_part.py b/kailo_beewell_dashboard/who_took_part.py
index ac4e642..a3eaf59 100644
--- a/kailo_beewell_dashboard/who_took_part.py
+++ b/kailo_beewell_dashboard/who_took_part.py
@@ -53,31 +53,45 @@ def create_demographic_page_intro(school_size, output='streamlit'):
return html_string
-def demographic_headers():
+def demographic_headers(survey_type='standard'):
'''
Creates dictionary of headers for the demographic section
+ Parameters
+ ----------
+ survey_type : string
+ Specifies whether this is for standard or symbol survey dashboard
+
Returns
-------
header_dict : dictionary
Dictionary where key is a variable name, and value is the header
'''
- header_dict = {
- 'year_group': 'Year group',
- 'fsm': 'Eligible for free school meals (FSM)',
- 'gender': 'Gender and transgender',
- 'sexual_orientation': 'Sexual orientation',
- 'care_experience': 'Care experience',
- 'young_carer': 'Young carers',
- 'neuro': 'Special educational needs and neurodivergence',
- 'ethnicity': 'Ethnicity',
- 'english_additional': 'English as an additional language',
- 'birth': 'Background'}
+ if survey_type == 'standard':
+ header_dict = {
+ 'year_group': 'Year group',
+ 'fsm': 'Eligible for free school meals (FSM)',
+ 'gender': 'Gender and transgender',
+ 'sexual_orientation': 'Sexual orientation',
+ 'care_experience': 'Care experience',
+ 'young_carer': 'Young carers',
+ 'neuro': 'Special educational needs and neurodivergence',
+ 'ethnicity': 'Ethnicity',
+ 'english_additional': 'English as an additional language',
+ 'birth': 'Background'}
+ elif survey_type == 'symbol':
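+ # Symbol survey collects a smaller set of demographic characteristics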
+ header_dict = {
+ 'gender': 'Gender',
+ 'year_group': 'Year group',
+ 'fsm': 'Eligible for free school meals (FSM)',
+ 'ethnicity': 'Ethnicity',
+ 'english_additional': 'English as an additional language'}
return header_dict
-def demographic_plots(dem_prop, chosen_school, chosen_group,
- output='streamlit', content=None):
+def demographic_plots(
+ dem_prop, chosen_school, chosen_group, output='streamlit',
+ content=None, survey_type='standard'):
'''
Creates the plots for the Who Took Part page/section, with the relevant
headers and descriptions, for the streamlit dashboard or PDF report.
@@ -96,6 +110,8 @@ def demographic_plots(dem_prop, chosen_school, chosen_group,
Specifies whether to write for 'streamlit' (default) or 'pdf'.
content : list
Optional input used when output=='pdf', contains HTML for report.
+ survey_type : string
+ Specifies whether this is for standard or symbol survey dashboard.
Returns
-------
@@ -113,25 +129,33 @@ def demographic_plots(dem_prop, chosen_school, chosen_group,
chosen_result = extract_nested_results(
chosen=chosen, group_lab='school_group_lab', plot_group=True)
- # Import descriptions for the charts
- response_descrip = create_response_description()
-
- # Import headers
- dem_header_dict = demographic_headers()
-
- # Loop through each of the groups of plots in dem_header_dict
+ # Generate titles and descriptions for the standard survey, and list of
+ # header sections
+ if survey_type == 'standard':
+ # Import descriptions for the charts
+ response_descrip = create_response_description()
+ # Import headers
+ dem_header_dict = demographic_headers(survey_type)
+ header_list = dem_header_dict.keys()
+ # We don't want section titles and descriptions for the symbol survey
+ # so just create list to loop through based on measure names
+ elif survey_type == 'symbol':
+ header_list = chosen['measure']
+
+ # Loop through each of the groups of plots
# This plots measures in loops, basing printed text on the measure names
# and basing the titles of groups on the group names (which differs to the
# survey responses page, which bases printed text on group names)
- for plot_group in dem_header_dict.keys():
+ for plot_group in header_list:
- # Add the title for that group
- if output == 'streamlit':
- st.header(dem_header_dict[plot_group])
- elif output == 'pdf':
- content.append(f'''
- {dem_header_dict[plot_group]}
''')
+ # Add the title for that group for standard survey
+ if survey_type == 'standard':
+ if output == 'streamlit':
+ st.header(dem_header_dict[plot_group])
+ elif output == 'pdf':
+ content.append(f'''
+ {dem_header_dict[plot_group]}
''')
# Find the measures in that group
measures = chosen_result.loc[
@@ -145,26 +169,27 @@ def demographic_plots(dem_prop, chosen_school, chosen_group,
for measure in measures:
i += 1
- # Add descriptive text if there is any
- if measure in response_descrip.keys():
- if output == 'streamlit':
- st.markdown(response_descrip[measure])
- elif output == 'pdf':
- if i > 0:
- content.append(f'''
-{markdown(response_descrip[measure])}
-
''')
- else:
- content.append(f'''
-{markdown(response_descrip[measure])}
''')
+ # Add descriptive text if there is any for standard survey
+ if survey_type == 'standard':
+ if measure in response_descrip.keys():
+ if output == 'streamlit':
+ st.markdown(response_descrip[measure])
+ elif output == 'pdf':
+ if i > 0:
+ content.append(f'''
+ {markdown(response_descrip[measure])}
+
''')
+ else:
+ content.append(f'''
+ {markdown(response_descrip[measure])}
''')
# Filter data for that measure and produce plot
to_plot = chosen_result[chosen_result['measure'] == measure]
if output == 'streamlit':
- survey_responses(to_plot)
+ survey_responses(to_plot, page='demographic')
elif output == 'pdf':
- content = survey_responses(
- to_plot, font_size=14, output='pdf', content=content)
+ content = survey_responses(to_plot, font_size=14, output='pdf',
+ content=content, page='demographic')
if output == 'pdf':
return content
diff --git a/requirements.txt b/requirements.txt
index df607f3..a4cb280 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,5 +1,5 @@
# For package installation
-pip==23.3.2
+pip==24.0
# For data processing
numpy==1.26.0
@@ -30,7 +30,13 @@ django==4.2.9
ipykernel==6.29.0
# To create a streamlit dashboard
-streamlit>=1.31.0
+streamlit>=1.31.1
# To upload package to PYPI
-twine==5.0.0
\ No newline at end of file
+twine==5.0.0
+
+# To produce readthedocs
+sphinx==7.2.6
+sphinx-rtd-theme==2.0.0
+myst-parser==2.0.0
+sphinx-autoapi==3.0.0
\ No newline at end of file