[DOCS][zeta.ops][CLEANUP]
Kye committed Dec 30, 2023
1 parent c798352 commit ddcdc19
Showing 5 changed files with 3 additions and 141 deletions.
15 changes: 0 additions & 15 deletions docs/zeta/ops/img_order_of_axes.md
@@ -4,14 +4,11 @@ The `img_order_of_axes` function is a utility designed to reorder the axes of an

This documentation provides an in-depth understanding of the `img_order_of_axes` function, its architecture, and the rationale behind its design. We will cover multiple usage examples, detailing the parameters, expected inputs and outputs, along with additional tips and resources.

## Introduction

The `img_order_of_axes` function plays a crucial role in scenarios where a batch of images needs to be combined into a single image with the individual images laid out horizontally. This is particularly useful when multiple similar images must be visualized side by side, such as comparing different stages of image processing or visualizing input-output pairs in machine learning tasks.

## Function Definition

### img_order_of_axes(x)

Rearranges the axes of an image tensor from batch-height-width-channel order to height-(batch * width)-channel order.

#### Parameters:
@@ -23,9 +20,6 @@ Rearranges the axes of an image tensor from batch-height-width-channel order to
#### Returns:
A rearranged tensor that combines the batch and width dimensions, resulting in a shape of (h, b * w, c).

## Functionality and Usage

The `img_order_of_axes` function relies on the `rearrange` utility provided by the `einops` library. `rearrange` offers a simple yet powerful way to alter the shape and axis order of a tensor without changing its data. For image tensors, such manipulation is often necessary to conform to visualization standards or to the input requirements of certain algorithms.

### Usage Example 1:

@@ -36,7 +30,6 @@ import torch
from einops import rearrange
from zeta.ops import img_order_of_axes

# Assuming torch is the backend used for tensors
# Create a dummy batch of images with shape (b, h, w, c)
batch_size, height, width, channels = 4, 100, 100, 3
dummy_images = torch.rand(batch_size, height, width, channels)
@@ -96,11 +89,3 @@ output = model(large_image.unsqueeze(0)) # Add batch dimension of 1 at the beginning
- Note that the `rearrange` function used within `img_order_of_axes` is not a PyTorch built-in; it comes from the `einops` library, which offers flexible operations for tensor manipulation.
- To install `einops`, run `pip install einops`.
- When visualizing the rearranged tensor, make sure your visualization tool can handle non-standard image shapes, since the resulting width is the batch size times the original width.

## References and Resources

For more information on tensor manipulation and visualization, please refer to the following resources:

- [Einops Documentation](https://einops.rocks/)
- [PyTorch Tensors Documentation](https://pytorch.org/docs/stable/tensors.html)
- [Image Visualization Techniques](https://matplotlib.org/3.1.1/gallery/images_contours_and_fields/image_demo.html) (using Matplotlib)
11 changes: 2 additions & 9 deletions docs/zeta/ops/merge_small_dims.md
@@ -1,13 +1,6 @@
# merge_small_dims


`merge_small_dims` is a utility function within the `zeta.ops` library, built to manipulate tensor dimensions in order to optimize computation. This document provides comprehensive information, examples, and guidelines for its usage. The following sections cover the purpose, functionality, usage examples, and additional tips related to `merge_small_dims`.

## Overview and Introduction

The `zeta.ops` library provides utility operations for working with tensors. It is common for tensor-oriented computations to encounter scenarios where the shape of a tensor may include dimensions with smaller sizes that can be beneficially merged to optimize performance or conform to specific requirement constraints.

The `merge_small_dims` function specifically targets such use-cases. It allows reshaping of a tensor by merging its smaller dimensions (below a certain threshold) while ensuring that the overall element count of the tensor remains unchanged. This operation is particularly useful in developing deep learning models where tensor dimensions might need adjustments before passing through layers or operations.
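To make the merging rule above concrete, here is a minimal sketch in the style of the Shampoo optimizer's dimension-merging routine. This is an assumption about how the function behaves, not the actual `zeta.ops` implementation, and the `threshold` parameter name is illustrative:

```python
from typing import List


def merge_small_dims(shape: List[int], threshold: int) -> List[int]:
    """Greedily fold adjacent dims together while the running product stays <= threshold."""
    if not shape:
        return [1]
    merged: List[int] = []
    product = 1
    for dim in shape:
        if product * dim <= threshold:
            product *= dim  # absorb this dim into the current group
        else:
            merged.append(product)  # close the group and start a new one
            product = dim
    merged.append(product)
    return merged


print(merge_small_dims([2, 3, 1, 5, 1], threshold=10))  # [6, 5]
```

Note that the total element count is preserved: `2 * 3 * 1 * 5 * 1 = 30 = 6 * 5`.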

## Class/Function Definition

@@ -34,7 +27,7 @@ When to use `merge_small_dims`:

```python
from typing import List
from zeta.ops import merge_small_dims

# Original tensor shape
orig_shape = [2, 3, 1, 5, 1]
36 changes: 1 addition & 35 deletions docs/zeta/ops/mos.md
@@ -1,39 +1,10 @@
# `MixtureOfSoftmaxes` Documentation

The `MixtureOfSoftmaxes` module is an implementation of the Mixture of Softmaxes (MoS) as described by Yang et al. in 2017. This module enhances the expressiveness of the softmax function by combining multiple softmaxes. It is particularly useful for tasks where the relationship between input features and output classes is complex and can benefit from a combination of multiple softmax distributions.
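Concretely, with $K$ mixture components, the MoS output distribution described by Yang et al. can be written as:

$$
P(y \mid x) \;=\; \sum_{k=1}^{K} \pi_{k}(x)\, \frac{\exp\!\big(w_{y}^{\top} h_{k}(x)\big)}{\sum_{y'} \exp\!\big(w_{y'}^{\top} h_{k}(x)\big)}, \qquad \sum_{k=1}^{K} \pi_{k}(x) = 1,
$$

where $h_{k}(x)$ is the $k$-th context vector and the $\pi_{k}$ are learned, input-dependent mixture weights. Because the mixture is taken over probabilities rather than logits, the resulting log-probability matrix is not constrained to low rank the way a single softmax is.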

## Table of Contents

- [Overview](#overview)
- [Installation](#installation)
- [Usage](#usage)
- [Initialization](#initialization)
- [Forward Pass](#forward-pass)
- [Examples](#examples)
- [Basic Example](#basic-example)
- [Complex Task](#complex-task)
- [Parameters](#parameters)
- [Return Value](#return-value)
- [Additional Information](#additional-information)
- [References](#references)

## Overview <a name="overview"></a>

The `MixtureOfSoftmaxes` module is designed to improve the modeling capabilities of the softmax function by allowing the combination of multiple softmax distributions. It takes an input tensor and computes a weighted sum of softmax outputs from different softmax layers. These weights are learned during training, enabling the model to adapt to the data's characteristics effectively.

The primary use case of the MoS module is in scenarios where a single softmax may not capture the complex relationships between input features and output classes. By combining multiple softmax distributions with learned mixture weights, the module provides a flexible approach to handle such situations.
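The weighted-sum idea described above can be sketched as follows. This is an illustrative re-implementation under assumed constructor arguments, not the actual `zeta` API; in particular, the real module's `forward` is documented to return two values, while this sketch returns only the mixed distribution:

```python
import torch
import torch.nn as nn


class MixtureOfSoftmaxes(nn.Module):
    """Illustrative MoS: a pi-weighted convex combination of K softmax distributions."""

    def __init__(self, num_mixtures: int, input_size: int, num_classes: int):
        super().__init__()
        self.num_mixtures = num_mixtures
        self.num_classes = num_classes
        self.prior = nn.Linear(input_size, num_mixtures)  # mixture weights pi_k(x)
        self.experts = nn.Linear(input_size, num_mixtures * num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pi = torch.softmax(self.prior(x), dim=-1)  # (batch, K)
        logits = self.experts(x).view(-1, self.num_mixtures, self.num_classes)
        probs = torch.softmax(logits, dim=-1)  # (batch, K, C), one softmax per mixture
        return torch.einsum("bk,bkc->bc", pi, probs)  # mix the K distributions


mos = MixtureOfSoftmaxes(num_mixtures=4, input_size=128, num_classes=10)
out = mos(torch.rand(8, 128))
print(out.shape)  # torch.Size([8, 10])
```

Each row of the output sums to 1, since it is a convex combination of valid probability distributions.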

## Installation <a name="installation"></a>

Before using the `MixtureOfSoftmaxes` module, ensure you have the required dependencies installed. You'll need:

- zetascale

You can install Zeta using pip:

```bash
pip install zetascale
```

Once you have the dependencies installed, you can import the module in your Python code.

@@ -139,10 +110,5 @@ The `forward` method of the `MixtureOfSoftmaxes` module returns two values:
## Additional Information <a name="additional-information"></a>

- The MoS module can be used in a variety of deep learning tasks, including classification, natural language processing, and more.
- It is important to fine-tune the number of mixtures and other hyperparameters based on the specific task and dataset.

## References <a name="references"></a>

- Yang, Z., Dai, Z., Salakhutdinov, R., and Cohen, W. W. (2017). Breaking the Softmax Bottleneck: A High-Rank RNN Language Model. ICLR 2018.

This documentation provides a comprehensive guide on using the `MixtureOfSoftmaxes` module. Feel free to explore its capabilities and adapt it to your specific machine learning tasks.
81 changes: 0 additions & 81 deletions docs/zeta/ops/rearrange.md

This file was deleted.

1 change: 0 additions & 1 deletion mkdocs.yml
@@ -190,7 +190,6 @@ nav:
- video_tensor_to_gift: "zeta/utils/video_tensor_to_gift.md"
- zeta.ops:
- img_compose_decompose: "zeta/ops/img_compose_decompose.md"
- rearrange: "zeta/ops/rearrange.md"
- img_transpose_2daxis: "zeta/ops/img_transpose_2daxis.md"
- img_transpose: "zeta/ops/img_transpose.md"
- img_order_of_axes: "zeta/ops/img_order_of_axes.md"
