An intelligent algorithm to decimate data without losing much information (signal shape / local extrema); this would be very helpful for rendering large datasets, as long as the decimation itself costs less time than the rendering time it saves 😆
Preliminary results show a significant slowdown when using the LTTB algorithm. Is the improvement in preserved signal quality worth it? One option is to apply it only when rendering large arrays, though those are also the cases where LTTB itself takes the longest. A sketch of the algorithm follows below.
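For reference, here is a minimal sketch of LTTB (Largest-Triangle-Three-Buckets) in Python/NumPy. The helper name `lttb_indices` and the exact bucketing details are my own; the canonical description is in the thesis linked below.

```python
import numpy as np

def lttb_indices(x, y, n_out):
    """Return indices of n_out representative points chosen by LTTB."""
    n = len(x)
    if n_out >= n or n_out < 3:
        return np.arange(n)

    # Split the interior points (everything but the first and last sample)
    # into n_out - 2 buckets of roughly equal size.
    edges = np.linspace(1, n - 1, n_out - 1).astype(int)
    selected = np.empty(n_out, dtype=int)
    selected[0] = 0
    selected[-1] = n - 1

    a = 0  # index of the previously selected point
    for i in range(n_out - 2):
        lo, hi = edges[i], edges[i + 1]
        # Average of the next bucket (or the last point for the final bucket)
        # serves as the third triangle vertex.
        if i + 2 < n_out - 1:
            nxt_lo, nxt_hi = edges[i + 1], edges[i + 2]
            avg_x, avg_y = x[nxt_lo:nxt_hi].mean(), y[nxt_lo:nxt_hi].mean()
        else:
            avg_x, avg_y = x[-1], y[-1]
        # Pick the point in the current bucket that forms the largest triangle
        # with the previously selected point and the next bucket's average.
        areas = np.abs((x[a] - avg_x) * (y[lo:hi] - y[a])
                       - (x[a] - x[lo:hi]) * (avg_y - y[a]))
        a = lo + int(np.argmax(areas))
        selected[i + 1] = a

    return selected

# Example: reduce 1,000,000 points to 2,000 for plotting.
x = np.arange(1_000_000)
y = np.cumsum(np.random.randn(1_000_000))
idx = lttb_indices(x, y, 2000)
x_ds, y_ds = x[idx], y[idx]
```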
Maybe I could downsample all the signals beforehand using this approach and then create new WFDB-formatted signals with a much lower total number of samples?
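A rough sketch of that pre-downsampling idea using the wfdb Python package. The `downsample_record` helper, the choice of deriving the indices from the first channel, and the `fmt` value are assumptions of mine; also note that WFDB assumes uniform sampling, so the stored `fs` is only nominal after LTTB's non-uniform selection.

```python
import numpy as np
import wfdb

def downsample_record(record_name, n_out, out_name):
    """Read a WFDB record, keep ~n_out LTTB-selected samples per channel,
    and write the result as a new WFDB record (hypothetical helper)."""
    rec = wfdb.rdrecord(record_name)
    x = np.arange(rec.sig_len)

    # Use the first channel to choose the indices so all channels stay
    # aligned on a common (non-uniform) sample grid.
    idx = lttb_indices(x, rec.p_signal[:, 0], n_out)
    p_ds = rec.p_signal[idx, :]

    # The WFDB format assumes uniform sampling, so this sampling frequency
    # is only a nominal average after LTTB's non-uniform selection.
    fs_ds = rec.fs * len(idx) / rec.sig_len
    wfdb.wrsamp(out_name,
                fs=fs_ds,
                units=rec.units,
                sig_name=rec.sig_name,
                p_signal=p_ds,
                fmt=['16'] * p_ds.shape[1])
```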
https://skemman.is/bitstream/1946/15343/3/SS_MSthesis.pdf