DOCS: Add documentation #34

Open · wants to merge 7 commits into base: main · Changes from 6 commits
1 change: 1 addition & 0 deletions .gitignore
@@ -1 +1,2 @@
qkay/__pycache__/
qkay/docs/build/
6 changes: 2 additions & 4 deletions README.md
@@ -30,11 +30,9 @@ To run qkay using Docker Compose, follow these steps:
Username: Admin
Password: abcd
6. Once you have logged in, go to the Admin panel and change your password to something more secure.
7. Once you have logged in, go to the Admin panel and add a dataset by clicking on the "Add Dataset" button. You will need to provide the following information:
Dataset Name: The name of the dataset you want to add.
Dataset Path: The path to the dataset on your computer relative to the /datasets/ folder mounted in the Docker image. For example, if the dataset is located at /data/ds1 on your computer and your .env file contains the variable DATASETS_PATH=/data/, you should enter /datasets/ds1/ as the dataset path. Note that the DATASETS_PATH variable in the .env file specifies the parent folder that contains all datasets, and the dataset path you enter in the Admin panel should be a subfolder of this parent folder, mounted as /datasets/ in the Docker image.
7. Once you have logged in, go to the Admin panel and add a dataset by clicking on the "Add Dataset" button. You will find the list of all datasets in the folder indicated in the .env file. Select the dataset you want to add.

# Contributing
We welcome contributions to Qkay. Please read the [contributing guide](https://github.com/nipreps/qkay/blob/docker-version/CONTRIBUTING.md) to get started.
We welcome contributions to Qkay. Please read the [contributing guide](https://www.nipreps.org/community/CONTRIBUTING/) to get started.
# License
Qkay is released under the Apache 2.0 License.
20 changes: 20 additions & 0 deletions docs/Makefile
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build

# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
35 changes: 35 additions & 0 deletions docs/make.bat
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.https://www.sphinx-doc.org/
exit /b 1
)

if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
55 changes: 55 additions & 0 deletions docs/source/conf.py
@@ -0,0 +1,55 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))


# -- Project information -----------------------------------------------------

project = 'qkay'
copyright = '2024, esavary'
author = 'esavary'

# The full version, including alpha/beta/rc tags
release = 'v.0.0.0'


# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
48 changes: 48 additions & 0 deletions docs/source/database.rst
@@ -0,0 +1,48 @@
Accessing MongoDB Database
==========================

Introduction
------------

This document provides instructions on how to access the MongoDB database used in qkay. There are two main methods outlined below: accessing it using MongoDB Compass and accessing it in Python code.

Accessing with MongoDB Compass
------------------------------

To access the MongoDB database used in qkay using MongoDB Compass, follow these steps:

1. Open MongoDB Compass on your local machine. If needed, you can find the instructions to install MongoDB Compass `here <https://www.mongodb.com/docs/compass/current/install/>`_.

2. Click on the "New Connection" button.

3. In the "New Connection" dialog, enter the connection URI. If you are using Docker to run qkay, the URI should typically be `mongodb://localhost:27017`. Ensure that the database is running when you try to access it.

4. Click on the "Connect" button.

5. Once connected, you will see a list of databases. Look for the database named `data_base_qkay` and click on it to access its collections and documents.

Accessing in Python script
--------------------------

To access the MongoDB database used in qkay in a Python script, you can use the pymongo library. Make sure you have pymongo installed in your Python environment. Here's a sample Python code snippet to connect to the database:

.. code-block:: python

from pymongo import MongoClient

# Connect to MongoDB
client = MongoClient('mongodb://localhost:27017/')

# Access the database
db = client['data_base_qkay']

# Now you can work with the database, for example:
# Access the ratings collection
collection = db['ratings']

# Query all ratings
ratings = collection.find({})

# Iterate over ratings
for rating in ratings:
print(rating)
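
Beyond listing every document, the usual pymongo query helpers work on this collection as well. The field name used in the filter below (`rater`) is only an assumption for illustration; inspect one document first to see the actual keys stored by qkay.

.. code-block:: python

    # Look at a single document to discover the schema
    print(collection.find_one())

    # Count how many ratings are stored
    print(collection.count_documents({}))

    # Filtered query -- the 'rater' field name is an assumption,
    # adapt it to the keys you see in the documents above
    for rating in collection.find({'rater': 'some_user'}):
        print(rating)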
15 changes: 15 additions & 0 deletions docs/source/index.rst
@@ -0,0 +1,15 @@
.. qkay documentation master file, created by
sphinx-quickstart on Fri May 3 10:57:35 2024.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.

Welcome to qkay's documentation!
================================

Contents
--------
.. toctree::
:maxdepth: 3

usage
database
69 changes: 69 additions & 0 deletions docs/source/usage.rst
@@ -0,0 +1,69 @@
How to Use
==========
Introduction
------------

Q’kay is a web service developed to deploy rigorous Quality Control (QC) protocols on large datasets, leveraging individual reports generated by tools like MRIQC and fMRIPrep. It follows a model-view-controller architecture, with a MongoDB database storing user data, datasets, and expert manual assessments ('ratings') for each evaluated image.

Q’kay offers a comprehensive suite of features, including:

- **Progress Tracking:** Q’kay tracks and stores the progression of raters, allowing for flexible task splitting and efficient workload management.
- **Docker Containerization:** Q’kay is easy to deploy and use across various environments.
- **Secure Login System:** Q’kay provides front-end views for administrators and raters, with administrators having exclusive access to assessment management, dataset uploads, and user administration.
- **Reproducible Protocols:** Q’kay offers specific feature options to anonymize, shuffle, and repeat visual reports in a reproducible manner.
- **Centralized Database:** All user information, datasets, progression data, and expert assessments are stored in the same MongoDB database, facilitating efficient data management and retrieval.

The graphical user interface, powered by the BootstrapJS library, is accessible via standard web browsers. The backend controller, implemented in Python using the Flask framework, provides a secure login system.
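
As a minimal, illustrative sketch of this pattern (not Q'kay's actual routes or code), a Flask controller backed by the MongoDB database could look like this:

.. code-block:: python

    from flask import Flask, jsonify
    from pymongo import MongoClient

    app = Flask(__name__)
    # Database and collection names follow the ones mentioned in this documentation
    db = MongoClient('mongodb://localhost:27017/')['data_base_qkay']

    @app.route('/ratings')
    def list_ratings():
        # Drop the MongoDB ObjectId so the documents are JSON-serializable
        return jsonify(list(db['ratings'].find({}, {'_id': 0})))

    if __name__ == '__main__':
        app.run(debug=True)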

Running with Docker
-------------------

To run the Q'kay package using Docker Compose, you'll need to have Docker and Docker Compose installed on your machine. You can download and install them from the following links:

- `Docker <https://docs.docker.com/get-docker/>`_
- `Docker Compose <https://docs.docker.com/compose/install/>`_

Before using Q'kay with Docker, you will need to set up the necessary environment variables by completing the `.env` file. In this file, you should provide the path to the MongoDB database and the path to the datasets you want to evaluate. If there is more than one dataset, the path should point to their parent folder.
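
As a minimal sketch of what the `.env` file might contain: `DATASETS_PATH` is the variable referenced in the README for the parent folder of the datasets, while the name of the MongoDB path variable below is only a placeholder, so check the `.env` template shipped with the repository for the exact keys.

.. code-block:: bash

    # Parent folder containing all datasets to evaluate
    # (mounted as /datasets/ inside the container)
    DATASETS_PATH=/data/

    # Placeholder name -- where the MongoDB data should live on the host
    MONGO_PATH=/srv/qkay/mongo/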

Run the containers with Docker Compose:

.. code-block:: bash

$ docker-compose up

After running the containers, you can access the application at `http://localhost`.
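
If you prefer to keep the containers running in the background, the standard Docker Compose options apply (this is generic Compose usage, not specific to Q'kay):

.. code-block:: bash

    $ docker-compose up -d    # start the containers in the background
    $ docker-compose logs -f  # follow the application logs
    $ docker-compose down     # stop and remove the containers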

Get Started with Q'kay
----------------------

To begin using Q'kay, follow these steps:

1. Once the containers are up and running, open a web browser and navigate to `http://localhost`.

2. Log in to the application using the following credentials:
- **Username:** Admin
- **Password:** abcd

3. After logging in, it is recommended to go to the Admin panel and change your password to something more secure.

4. In the Admin panel, you can add your first dataset by clicking on the "Add Dataset" button. You will see the list of all datasets present in the dataset folder you indicated in the `.env` file. Select the dataset you want to add and click on the "Add" button.
The name of the dataset is inferred from the dataset_description.json file in the dataset folder, if it exists; otherwise, the folder name is used. A sketch of this naming rule is given at the end of this section.

5. Add your first users
- In the Admin panel, click on the "Add User" button to add a new user.
- Provide the following information for the user:
- **Username:** The username for the new user.
- **Password:** The password for the new user.

They will be able to update their password after logging in.

6. Assign the dataset to the user
- In the Admin panel, click on the "Assign Dataset" button to assign a dataset to a user.
- Select the user from the dropdown list.
- Select the dataset from the dropdown list.
- Select options such as randomizing the order, repeating, or anonymizing the reports.
- Click on the "Assign" button.

7. The user can now log in and start rating the images in the dataset assigned to them.
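
As a rough sketch of the dataset-naming rule described in step 4 (not Q'kay's actual implementation, and assuming the BIDS-style "Name" field of dataset_description.json is the one used):

.. code-block:: python

    import json
    from pathlib import Path

    def infer_dataset_name(dataset_dir):
        """Return the 'Name' field of dataset_description.json if present,
        otherwise fall back to the folder name."""
        folder = Path(dataset_dir)
        description = folder / 'dataset_description.json'
        if description.exists():
            return json.loads(description.read_text()).get('Name', folder.name)
        return folder.name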