Releases · pythonlessons/mltu
1.1.4
[1.1.4] - 2022-09-29
Changed
- Improved `mltu.torch.dataProvider.DataProvider` to handle `multiprocessing`: when it doesn't work, it switches to `multithreading`
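A minimal sketch of the fallback idea, with illustrative names rather than mltu's actual internals: multiprocessing requires picklable workers, so a provider can probe for that and drop back to threads.

```python
import pickle
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def make_executor(worker_fn, workers: int = 4):
    """Hypothetical helper: prefer processes, fall back to threads.

    Multiprocessing requires the worker to be picklable; when it is not
    (lambdas, local closures, some platforms), threads still work.
    """
    try:
        pickle.dumps(worker_fn)  # probe: can this worker cross a process boundary?
        return ProcessPoolExecutor(max_workers=workers)
    except Exception:
        # Fall back to threads: slower for CPU-bound preprocessing, but always available.
        return ThreadPoolExecutor(max_workers=workers)
```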
1.1.3
[1.1.3] - 2022-09-29
Changed
- Removed the `librosa` library dependency from requirements; it is now optional and required only by modules that use librosa
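The standard way to make a dependency optional like this (a sketch of the pattern, not mltu's exact code) is to defer the import failure until the feature is actually used:

```python
try:
    import librosa  # optional: only audio modules need it
except ImportError:
    librosa = None

def load_audio(path: str, sample_rate: int = 16000):
    """Load an audio file, failing with a clear message if librosa is absent."""
    if librosa is None:
        raise ImportError("librosa is required for audio support: pip install librosa")
    audio, sr = librosa.load(path, sr=sample_rate)
    return audio, sr
```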
Added
- Created `Tutorials.05_sound_to_text.train_no_limit.py`, which demonstrates how to train an audio recognition model with `mltu` without an audio length limit
1.1.1
[1.1.1] - 2022-09-26
Changed
- Included `self._executor` as a generator in the `mltu.dataProvider.DataProvider` object, enabling batch preprocessing to be modified without changing the original code (see the sketch after this list)
- Introduced changes in `mltu.torch.dataProvider.py` to handle data in multiprocessing and multithreading modes, for faster preprocessing while training torch models
- Modified the `mltu.transformers.AudioPadding` object to work with batches of raw audio data
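A rough sketch of the executor-as-generator idea under assumed names (it mirrors the description above, not mltu's exact internals): every batch flows through an overridable generator, so a subclass can change preprocessing without touching the iteration logic.

```python
class DataProvider:
    def __init__(self, dataset, batch_size: int = 16):
        self.dataset = dataset
        self.batch_size = batch_size

    def _executor(self, batches):
        # Overridable generator: yields (optionally preprocessed) batches.
        for batch in batches:
            yield batch

    def __iter__(self):
        batches = (
            self.dataset[i : i + self.batch_size]
            for i in range(0, len(self.dataset), self.batch_size)
        )
        yield from self._executor(batches)

class ScaledProvider(DataProvider):
    def _executor(self, batches):
        # Custom preprocessing injected without changing the base class.
        for batch in batches:
            yield [x / 255.0 for x in batch]
```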
Added
- Created tutorial `10_wav2vec2_torch` (Audio to Text model), which shows how to train a wav2vec2 model with mltu
1.1.0
[1.1.0] - 2022-08-28
Changed
- Changed the `mltu.transformers.SpectrogramPadding` object to pad the spectrogram end with zeros instead of the start
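In plain numpy terms the change amounts to the following (a sketch; the `(time, features)` layout and the `max_len` argument are assumptions):

```python
import numpy as np

def pad_spectrogram(spectrogram: np.ndarray, max_len: int) -> np.ndarray:
    """Pad a (time, features) spectrogram with zeros at the end, up to max_len."""
    pad = max_len - spectrogram.shape[0]
    # Previously the zeros went before the data: np.pad(spectrogram, ((pad, 0), (0, 0)))
    return np.pad(spectrogram, ((0, pad), (0, 0)))
```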
Added
- Created the `Tutorials/09_translation_transformer` tutorial, which shows how to train a translation Transformer model
- Created the `mltu.tensorflow.tokenizers` module, which contains a `CustomTokenizer` for text data
- Created the `mltu.tensorflow.transformer.attention` module, which contains the `BaseAttention`, `CrossAttention`, `GlobalSelfAttention` and `CausalSelfAttention` layers
- Created the `mltu.tensorflow.transformer.layers` module, which contains the `positional_encoding` function (sketched after this list) and the `PositionalEmbedding`, `FeedForward`, `EncoderLayer`, `DecoderLayer`, `Encoder` and `Decoder` layers, plus the `Transformer` model
- Created the `mltu.tensorflow.transformer.callbacks` module, which contains the `EncDecSplitCallback` callback for splitting a Transformer model into separate encoder and decoder models
- Created the `mltu.tensorflow.transformer.utils` module, which contains the `MaskedLoss` loss and `MaskedAccuracy` metric, used for training Transformer models
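For reference, a `positional_encoding` function in this style typically computes the standard sinusoidal encoding from "Attention Is All You Need"; a numpy sketch (not copied from mltu, and assuming an even `depth`):

```python
import numpy as np

def positional_encoding(length: int, depth: int) -> np.ndarray:
    """Sinusoidal positional encoding of shape (length, depth); depth must be even."""
    positions = np.arange(length)[:, np.newaxis]                   # (length, 1)
    depths = np.arange(depth // 2)[np.newaxis, :] / (depth // 2)   # (1, depth/2)
    angle_rads = positions / (10000 ** depths)                     # (length, depth/2)
    # First half sine, second half cosine, concatenated along the feature axis.
    return np.concatenate([np.sin(angle_rads), np.cos(angle_rads)], axis=-1)
```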
1.0.15
[1.0.15] - 2022-07-15
Changed
- Fixed a bug in `mltu.dataProvider.DataProvider` to work with `batch_postprocessors`
1.0.14
[1.0.14] - 2022-07-13
Changed
- Included an `augment_annotation` bool option in all `mltu.augmentors`, to be able to choose whether or not to augment the annotation
- Changed `mltu.augmentors.RandomRotate` to expose `rotate_image` as a `@staticmethod`, so it can be used without creating an object
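The `@staticmethod` change means the rotation helper can be reused without constructing an augmentor; a sketch of the pattern (the real `rotate_image` signature in mltu may differ):

```python
import cv2
import numpy as np

class RandomRotate:
    @staticmethod
    def rotate_image(image: np.ndarray, angle: float) -> np.ndarray:
        """Rotate an image around its center; callable without an instance."""
        h, w = image.shape[:2]
        matrix = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        return cv2.warpAffine(image, matrix, (w, h))

# Usable directly, no object needed:
rotated = RandomRotate.rotate_image(np.zeros((64, 64, 3), np.uint8), angle=15)
```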
Added
- Added a `batch_postprocessor` option to `mltu.dataProvider.DataProvider`, to be able to postprocess a batch after augmentation
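A hedged usage sketch: a batch postprocessor is just a callable that receives the augmented batch and returns the transformed one. The wiring shown in the comment is assumed from the description above, not mltu's documented signature.

```python
import numpy as np

def stack_batch(batch):
    """Example postprocessor: turn a list of (image, label) pairs into two arrays."""
    images, labels = zip(*batch)
    return np.stack(images), np.array(labels)

# Hypothetical wiring, mirroring the option described above:
# provider = DataProvider(dataset, batch_postprocessors=[stack_batch])
```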
1.0.12
1.0.11
Release 1.0.11 with some bug fixes
1.0.10
New release with minor changes
1.0.9
Spelling mistake fixes, changed single quotes to double quotes, introduced `CVImage` and `PillowImage` objects