### U-Sleep API Script
# You must have the packages imported below. 'usleep-api' can be installed via pip; 'os', 'os.path', 'time', 'logging', and 'pdb' ship with Python.
# You do NOT need 'mne' for this script; 'pdb' is only used by the optional, commented-out debugging breakpoints.
import os
import os.path
import time
from usleep_api import USleepAPI
import logging
import pdb
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api_example")
# You can install the API via pip with the following command: pip install usleep-api
# 'logging' is part of the Python standard library, so it does not need to be installed separately.
#::::CODE EXPLANATION::::
#For this script to run, you must alter the 'inputdir' and 'outputdir' variables. These are strings holding the paths of the input directory and
#the output directory respectively. More or less everything else will work automatically so long as you are connected to the QIMR storage drives (L-drive).
#Finally, the variable 'token' below, which holds some hideously long sequence of characters, is the online-acquired U-Sleep
#API token. It is needed to use U-Sleep from within a Python script such as this. You will need a U-Sleep account with the ability to generate
#these tokens, which can be created and authenticated here: https://sleep.ai.ku.dk/home From this URL, make an account and then request
#the ability to generate an API token. This might require emailing the creators of U-Sleep, whose details can be found in the 'Contact' section of
#the above website, at the bottom of the page, next to the Privacy Policy and Terms of Service.
#When the above is set up correctly, you should be able to press the 'Run Code' button and have it run. If errors occur, they usually require
#a case-by-case solution. In general, here are some of the most common:
# - Outdated or unauthentic API token --- solution is to request and use a new API token.
# - Invalid Channel/Channel Groups --- solutions vary, though usually this is the result of a sampling frequency that isn't 128 Hz, so resample to 128 Hz (see the resampling sketch further below). It can also occur when manually inputting channel groups; let the API figure that out automatically.
# - No such file or directory --- this can occur when the script tries to process a file twice and the MATLAB script deletes the EDF file before it can finish. It can also occur from a plain old typo in the 'inputdir'/'outputdir' variables.
#Other than these, many of the problems that can occur will unfortunately have to be solved manually by the user. Although most of this is fairly
#bulletproof, there are a lot of weird, moving parts to this script that can go wrong. If no errors occur, it is recommended to have the MATLAB
#script running first and THEN this Python script, though there shouldn't be any major issues with running this first; you may just encounter
#the third common error listed above.
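#The commented-out sketch below shows one way to resample an EDF file to 128 Hz before uploading, which addresses the second common error above.
#It is a minimal, optional example and is NOT part of the main script; it assumes you have installed 'mne' and 'edfio' (pip install mne edfio),
#and the file paths used are placeholders only.
#
#import mne
#
#def resample_edf_to_128hz(in_path, out_path):
#    """Read an EDF, resample all channels to 128 Hz, and write a new EDF."""
#    raw = mne.io.read_raw_edf(in_path, preload=True)   # load the recording into memory
#    raw.resample(128)                                   # resample every channel to 128 Hz
#    raw.export(out_path, fmt="edf", overwrite=True)     # write the resampled copy (uses the 'edfio' backend)
#
#resample_edf_to_128hz("C:/Data&Scripts/Tempbin/example.edf", "C:/Data&Scripts/Tempbin/example_128hz.edf")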
#NOTE: the API token expires every 12 hours. Thus, if the user is to have this script run for longer than 12 hours, it is imperative that they
#check back on the code after around said 12 hours, as the script will throw an error and stop running. Fortunately, if this happens, one must only
#generate a new token and copy-paste it into the code as before, then hit run and the code should continue along fine. It should not go out of sync; it
#only checks for valid EDF files in the input directory, processes them, and spits out the resulting .tsv files into the output directory.
#As the MATLAB script will not proceed until it finds said .tsv file, they should remain in sync. A sketch of one way to swap in a fresh token without editing the source is shown below.
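#The commented-out helper below is a minimal sketch of how a fresh token could be pasted in at the console when the old one expires, instead of
#editing and re-running the script. It is NOT wired into the main loop; the function name and the idea of prompting via input() are illustrative
#assumptions only, not part of the usleep-api package.
#
#def make_api_with_prompt(current_token=None):
#    """Return a USleepAPI object, prompting the user to paste a new token if none is supplied."""
#    if not current_token:
#        current_token = input("Paste a fresh U-Sleep API token: ").strip()  # tokens expire every 12 hours
#    return USleepAPI(api_token=current_token)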
# Build a sorted list of the study names (those containing "DXA") found on the L-drive.
listof = os.listdir("L:/Lab_JamesR/Paediatric_Sleep/diagnostic")
listnames = []
for j in listof:
    if "DXA" in j:
        listnames.append(j)
listnames = sorted(listnames)
#pdb.set_trace()
# Create an API object with your U-Sleep API token. Paste the token string below (see the environment-variable sketch further down for an alternative).
token = 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJleHAiOjE2OTExNTA2NTAsImlhdCI6MTY5MTEwNzQ1MCwibmJmIjoxNjkxMTA3NDUwLCJpZGVudGl0eSI6IjFiNGQ2ZGI0ZTYyZCJ9.ELMEZ9ok04P9gghNG4k2UVFtwsh8gzyQ_Yu1X7sdsQA'
#pdb.set_trace()
#sorting out api and session
api = USleepAPI(api_token=token)
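#As a hedged alternative to pasting the token into the source, the token could be read from an environment variable instead, as in the
#commented-out sketch below. The variable name 'USLEEP_API_TOKEN' is an illustrative assumption, not something the API requires.
#
#token = os.environ.get("USLEEP_API_TOKEN", token)  # fall back to the hardcoded token if the variable is unset
#api = USleepAPI(api_token=token)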
# ::::IMPORTANT - REQUIRES USER ATTENTION HERE::::
# ::::INPUT DIRECTORY::::
# Below is the input file directory, that is to say, the folder that will contain the EDF files to be uploaded and processed by U-Sleep.
inputdir = "C:/Data&Scripts/Tempbin/"
# ::::OUTPUT DIRECTORY::::
# Below is the output file directory, i.e. the folder that will receive the .tsv files (the hypnograms) this produces.
outputdir = "L:/Lab_JamesR/alexW/U-Sleep_Hypnograms/v2new/"
#
# ::::END USER INPUT SECTION::::
dispbool = False
#counter for quick predict = 548
counter = 1
tsvlist = os.listdir(outputdir)
#pdb.set_trace()
#::::Below is a command in the API to remove all current sessions; this is done to reduce the potential for session overflow::::
#::::You will need to comment out or remove the line below if you wish to have multiple instances of this script running in tandem::::
api.delete_all_sessions()
#End section
#pdb.set_trace()
#:::MAIN:::#
while len(tsvlist) < 3800:
    edflist = os.listdir(inputdir)
    donetsvs = os.listdir(outputdir)
    # Refresh the list of finished hypnograms so the loop condition can actually reach 3800 and stop.
    tsvlist = donetsvs
    if edflist:
        edftemp = edflist[0]
        # Only process this EDF if its hypnogram (.tsv) is not already in the output directory.
        if edflist[0].removesuffix(".edf") + ".tsv" not in donetsvs:
            #pdb.set_trace()
            edfname = edflist[0]
            #pdb.set_trace()
            if edflist.count(edfname) > 0:
                print("\n\nPerforming U-Sleep analysis of " + edfname + " ---> (" + str(counter) + ")\n\n")
                inname = inputdir + edfname
                outname = outputdir + edfname.removesuffix(".edf") + ".tsv"
                sessionv2 = api.new_session(session_name = edfname.removesuffix(".edf") + "_v2")
                sessionv2.set_model("U-Sleep v2.0")
                #pdb.set_trace()
                sessionv2.upload_file(inname, anonymize_before_upload=False)
                #pdb.set_trace()
                # One prediction per 30-second epoch at 128 Hz.
                sessionv2.predict(data_per_prediction=(128*30))
                success = sessionv2.wait_for_completion()
                if success:
                    dispbool = False
                    print("\n\nSuccessfully produced hypnogram of " + edfname.removesuffix(".edf") + "\n\n")
                    #pdb.set_trace()
                    sessionv2.download_hypnogram(outname, file_type = "tsv")
                    counter += 1
                    sessionv2.delete_session()
                    print("\n\nAwaiting EDF for input\n\n")
                    time.sleep(1)
                else:
                    logger.error("Prediction failed.")
                    time.sleep(1)
        else:
            # A hypnogram already exists for this EDF; wait for the MATLAB script to clear it from the input directory.
            time.sleep(1)
    else:
        if dispbool == False:
            print("\n\nAwaiting EDF file input " + listnames[counter] + "\n\n")
            dispbool = True
        time.sleep(1)
# Below is the API function for a quick prediction.
#hypnogram, log = api.quick_predict(
#    input_file_path=inname,
#    output_file_path=outname,
#    anonymize_before_upload=False
#)