
Releases: pythonlessons/mltu

1.1.4

04 Oct 07:14

[1.1.4] - 2022-10-04

Changed

  • Improved mltu.torch.dataProvider.DataProvider to fall back to multithreading when multiprocessing does not work (as sketched below)
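
The fallback the release describes, processes first and threads as a backup, is a common pattern. A minimal sketch using only the standard library (not mltu's actual code):

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def get_executor(max_workers: int = 4):
    """Prefer a process pool, but fall back to threads when multiprocessing fails."""
    try:
        executor = ProcessPoolExecutor(max_workers=max_workers)
        executor.submit(int, 0).result(timeout=10)  # smoke-test: forces worker startup
        return executor
    except Exception:
        # e.g. no usable process support on the platform, or spawn errors
        return ThreadPoolExecutor(max_workers=max_workers)
```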

1.1.3

29 Sep 13:52

[1.1.3] - 2022-09-29

Changed

  • Removed the librosa dependency from requirements; it is now optional and required only by modules that use librosa
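
Making a heavy dependency optional usually comes down to a guarded import. A minimal sketch of the pattern (not mltu's exact code):

```python
try:
    import librosa  # optional: only needed by the audio modules
except ImportError:
    librosa = None

def load_audio(path: str, sample_rate: int = 16000):
    # fail with a clear message only when the optional feature is actually used
    if librosa is None:
        raise ImportError("This module requires librosa: pip install librosa")
    audio, sr = librosa.load(path, sr=sample_rate)
    return audio, sr
```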

Added

  • Created Tutorials.05_sound_to_text.train_no_limit.py, which demonstrates how to train an audio recognition model with mltu without an audio length limit

1.1.1

26 Sep 14:26

[1.1.1] - 2022-09-26

Changed

  • Included self._executor as a generator in the mltu.dataProvider.DataProvider object, so batch preprocessing can be modified without changing the original code (see the sketch after this list)
  • Introduced changes in mltu.torch.dataProvider.py to handle data in multiprocessing and multithreading modes, for faster preprocessing while training torch models
  • Modified the mltu.transformers.AudioPadding object to work with batches of raw audio data
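
Since self._executor is an internal hook, its exact signature is an assumption here, but the release note suggests batch preprocessing can now be customized by overriding the generator in a subclass, roughly like this:

```python
from mltu.dataProvider import DataProvider

class CustomDataProvider(DataProvider):
    def _executor(self):
        # wrap the parent generator and adjust each batch before it is returned;
        # the yielded structure is assumed, not taken from mltu's documentation
        for batch in super()._executor():
            # ... modify the preprocessed batch here ...
            yield batch
```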

Added

  • Created tutorial 10_wav2vec2_torch (an Audio-to-Text model), which shows how to train a wav2vec2 model with mltu

1.1.0

29 Aug 10:59

[1.1.0] - 2022-08-28

Changed

  • Changed the mltu.transformers.SpectrogramPadding object to pad the end of the spectrogram with zeros instead of the start (illustrated below)
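
For illustration, padding a spectrogram at the end along the time axis (assuming time is the first dimension) looks like this in plain NumPy; mltu's SpectrogramPadding presumably wraps similar logic:

```python
import numpy as np

def pad_spectrogram_end(spectrogram: np.ndarray, max_length: int, pad_value: float = 0.0) -> np.ndarray:
    # append constant values after the signal instead of before it
    pad_amount = max_length - spectrogram.shape[0]
    return np.pad(spectrogram, ((0, pad_amount), (0, 0)), constant_values=pad_value)
```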

Added

  • Created the Tutorials/09_translation_transformer tutorial, which shows how to train a translation Transformer model
  • Created the mltu.tensorflow.tokenizers module, containing a CustomTokenizer for text data
  • Created the mltu.tensorflow.transformer.attention module, containing the BaseAttention, CrossAttention, GlobalSelfAttention and CausalSelfAttention layers
  • Created the mltu.tensorflow.transformer.layers module, containing the positional_encoding function, the PositionalEmbedding, FeedForward, EncoderLayer, DecoderLayer, Encoder and Decoder layers, and the Transformer model
  • Created the mltu.tensorflow.transformer.callbacks module, containing the EncDecSplitCallback callback, which splits a trained Transformer model into separate encoder and decoder models
  • Created the mltu.tensorflow.transformer.utils module, containing the MaskedLoss loss and MaskedAccuracy metric used for training Transformer models (see the sketch after this list)
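
The masking idea behind MaskedLoss and MaskedAccuracy is standard for padded sequences: exclude padding positions from the average. A generic TensorFlow sketch (mltu's implementation may differ in details such as the padding token id):

```python
import tensorflow as tf

def masked_loss(y_true, y_pred, pad_token: int = 0):
    # per-token cross-entropy, averaged only over non-padding positions
    mask = tf.cast(tf.not_equal(y_true, pad_token), tf.float32)
    loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred, from_logits=True)
    return tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)
```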

1.0.15

15 Jul 08:35

[1.0.15] - 2022-07-15

Changed

  • Fixed a bug in mltu.dataProvider.DataProvider so it works with batch_postprocessors

1.0.14

13 Jul 12:50

[1.0.14] - 2022-07-13

Changed

  • Added an augment_annotation bool option to all mltu.augmentors, to choose whether the annotation is augmented along with the image (usage sketched after this list)
  • Changed mltu.augmentors.RandomRotate to expose rotate_image as a @staticmethod, so it can be used without creating an object
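
A hedged usage sketch of both changes; aside from augment_annotation, the constructor arguments and the rotate_image signature are assumptions, not taken from mltu's documentation:

```python
import numpy as np
from mltu.augmentors import RandomRotate

image = np.zeros((64, 64, 3), dtype=np.uint8)

# keep annotations untouched while still rotating the image
augmentor = RandomRotate(augment_annotation=False)

# the static method can now be called without instantiating the augmentor
rotated = RandomRotate.rotate_image(image, angle=15)
```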

Added

  • Added a batch_postprocessor option to mltu.dataProvider.DataProvider, to postprocess a batch after augmentation (example below)
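
A batch postprocessor receives the whole batch after augmentation. A hypothetical example; the dataset layout and other constructor arguments are illustrative, and the parameter is spelled batch_postprocessors elsewhere in this changelog:

```python
import numpy as np
from mltu.dataProvider import DataProvider

def scale_images(batch_data, batch_annotations):
    # hypothetical postprocessor: normalize pixel values for the whole batch
    return np.asarray(batch_data, dtype=np.float32) / 255.0, batch_annotations

dataset = [["data/image1.png", "label1"], ["data/image2.png", "label2"]]  # illustrative
provider = DataProvider(dataset=dataset, batch_size=2, batch_postprocessors=[scale_images])
```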

1.0.12

08 Jun 12:38


1.0.11

07 Jun 09:22

Release 1.0.11 with some bug fixes

1.0.10

06 Jun 09:40

New release with minor changes

1.0.9

24 May 08:42

Fixed spelling mistakes, changed single quotes to double quotes, and introduced the CVImage and PillowImage objects
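
In recent mltu versions these image wrappers are imported from mltu.annotations.images; the import path and constructor usage shown here are assumptions based on later tutorials, and the file path is illustrative:

```python
from mltu.annotations.images import CVImage

image = CVImage("data/sample.png")  # OpenCV-backed image wrapper
```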