Commit [CLEANUP]
Kye committed Dec 27, 2023
1 parent 0ad9df3 commit d7003e1
Showing 33 changed files with 2,076 additions and 1,048 deletions.
`docs/zeta/utils/cast_if_src_dtype.md` (66 additions, 33 deletions)
# cast_if_src_dtype

# Description
`cast_if_src_dtype` is a utility function that checks the data type (`dtype`) of a given tensor. If the tensor's `dtype` matches the provided source `dtype` (`src_dtype`), the function will cast the tensor to the target `dtype` (`tgt_dtype`). After the casting operation, the function returns the updated tensor and a `boolean` flag indicating whether the tensor data type was updated.

This function provides a convenient way to enforce specific data types for torch tensors.
# Class/Function Signature in PyTorch

`cast_if_src_dtype(tensor, src_dtype, tgt_dtype)`

This function changes the data type (`dtype`) of a given tensor if its current data type matches the specified source data type. The process of changing one type to another is called "casting" in both general computing and PyTorch. The function takes three arguments: `tensor`, `src_dtype`, and `tgt_dtype`.
```python
def cast_if_src_dtype(
    tensor: torch.Tensor, src_dtype: torch.dtype, tgt_dtype: torch.dtype
):
    updated = False
    if tensor.dtype == src_dtype:
        tensor = tensor.to(dtype=tgt_dtype)
        updated = True
    return tensor, updated
```
# Parameters

| Parameter | Type | Description |
| :-------- | :--: | :---------- |
| `tensor` | `torch.Tensor` | The tensor whose data type is to be checked and potentially updated. |
| `src_dtype` | `torch.dtype` | The source data type that should trigger the casting operation. |
| `tgt_dtype` | `torch.dtype` | The target data type that the `tensor` will be cast into if the source data type matches its data type. |

# Functionality and Use
**Functionality:** `cast_if_src_dtype` takes in three parameters: a tensor, a source data type, and a target data type. If the data type of the tensor equals the source data type, the function casts this tensor to the target data type. The function then returns both the potentially modified tensor and a flag indicating whether the cast was performed.

**Usage**: This utility function is used when certain operations or functions require inputs of a specific data type. A common scenario is when tensors with floating-point data types need to be converted to integers or vice versa.
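A common instance of that scenario is upcasting half-precision inputs before a numerically sensitive operation. The following sketch illustrates the pattern; the helper is re-implemented inline (mirroring the definition shown above) so the snippet runs without zeta installed.

```python
import torch


# Inline re-implementation of cast_if_src_dtype, mirroring the
# definition shown above, so this snippet is standalone.
def cast_if_src_dtype(tensor, src_dtype, tgt_dtype):
    updated = False
    if tensor.dtype == src_dtype:
        tensor = tensor.to(dtype=tgt_dtype)
        updated = True
    return tensor, updated


# Upcast float16 inputs to float32 before the op, then restore the dtype.
x = torch.randn(4, dtype=torch.float16)
x, upcast = cast_if_src_dtype(x, torch.float16, torch.float32)
result = torch.linalg.vector_norm(x)  # computed in float32
if upcast:
    result = result.to(torch.float16)

print(result.dtype)  # torch.float16
```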

The function returns two values:

1. The potentially updated tensor.
2. A boolean flag (`True` if the tensor was cast, `False` if not).

# Usage Examples
Below are some examples of how the function could be used:

## Example 1
```python
import torch
from zeta.utils import cast_if_src_dtype

# Given: a float tensor
tensor = torch.tensor([1.0, 2.0, 3.0])

# We want to convert it to an integer tensor if its dtype is float32
tensor, updated = cast_if_src_dtype(tensor, torch.float32, torch.int32)

print(tensor)  # tensor([1, 2, 3], dtype=torch.int32)
print(updated)  # True
```

## Example 2
```python
import torch
from zeta.utils import cast_if_src_dtype

# Given: an integer tensor (torch.int64 by default)
tensor = torch.tensor([1, 2, 3])

# We want to convert it to a float tensor if its dtype is int64
tensor, updated = cast_if_src_dtype(tensor, torch.int64, torch.float32)

print(tensor)  # tensor([1., 2., 3.])
print(updated)  # True
```

## Example 3
```python
import torch
from zeta.utils import cast_if_src_dtype

# Given: an integer tensor
tensor = torch.tensor([1, 2, 3])

# The dtype (int64) does not match the source dtype, so the tensor remains the same
tensor, updated = cast_if_src_dtype(tensor, torch.float32, torch.int32)

print(tensor)  # tensor([1, 2, 3])
print(updated)  # False
```
# Resources and References
For more information on tensor operations and data types in PyTorch, refer to the official PyTorch documentation:

- [PyTorch Tensor Operations](https://pytorch.org/docs/stable/tensors.html)
- [PyTorch Data Types](https://pytorch.org/docs/stable/tensor_attributes.html#torch.torch.dtype)

# Note
The `cast_if_src_dtype` function doesn't modify the original tensor in-place. Instead, it creates a new tensor with the updated data type. Keep that in mind during function calls, and be sure to substitute the original tensor with the returned tensor to reflect the change in the rest of your code.
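To illustrate the note above, here is a small, self-contained sketch demonstrating that the original tensor keeps its dtype; the helper is again re-implemented inline (mirroring the definition shown earlier) so the snippet runs standalone.

```python
import torch


# Inline re-implementation of cast_if_src_dtype for illustration,
# mirroring the definition shown earlier.
def cast_if_src_dtype(tensor, src_dtype, tgt_dtype):
    updated = False
    if tensor.dtype == src_dtype:
        tensor = tensor.to(dtype=tgt_dtype)
        updated = True
    return tensor, updated


original = torch.tensor([1.0, 2.0, 3.0])  # float32 by default
cast, updated = cast_if_src_dtype(original, torch.float32, torch.int32)

# Only the returned tensor is cast; the original is untouched.
print(original.dtype)  # torch.float32
print(cast.dtype)      # torch.int32
print(updated)         # True
```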
`docs/zeta/utils/cast_tuple.md` (84 additions, 32 deletions)
# cast_tuple

# Zeta Utils Documentation
## Table of Contents
1. [Introduction](#introduction)
2. [Installation & Import](#installation-import)
3. [Function Definitions](#function-definitions)
4. [Usage Examples](#usage-examples)
5. [Additional Information](#additional-information)
6. [References and Resources](#references-resources)

## Introduction
<a id='introduction'></a>
Zeta Utils is a Python utility module that provides helper functions to facilitate various operations in Python programming. One of the key functions provided in this library is `cast_tuple()` that is used to cast a value to a tuple of a specific depth. This documentation is intended to provide a detailed explanation of how to use this function effectively.

## Installation & Import
<a id='installation-import'></a>

Zeta Utils is an integral part of the Zeta package. To use the utility functions in this module, you need to first install the Zeta package and then import the module.

```python
# Installation (run in a shell, not in Python):
#   pip install zeta

# Import
from zeta import utils
```

## Function Definitions
<a id='function-definitions'></a>

### Function: cast_tuple
```python
utils.cast_tuple(val, depth)
```

This function is used to cast a value to a tuple of a specific depth.

#### Arguments:

| Argument | Type | Description |
| --- | --- | --- |
| `val` | `Any` | The value to be cast; this can be any type. |
| `depth` | `int` | The depth of the tuple, i.e., the number of elements in the tuple to be returned. |

#### Returns:

`tuple`: Tuple of the given depth with repeated `val`.

## Usage Examples
<a id='usage-examples'></a>

### Example 1: Casting an integer to a tuple

```python
from zeta import utils

val = 5
depth = 3
result = utils.cast_tuple(val, depth)

print(result)  # Prints: (5, 5, 5)
```

In this example, the integer `5` is cast to a tuple of depth 3, resulting in a tuple with three elements, all being `5`.

### Example 2: Casting a string to a tuple

```python
from zeta import utils

val = "Hello"
depth = 2
result = utils.cast_tuple(val, depth)

print(result)  # Prints: ('Hello', 'Hello')
```
In this example, the string `"Hello"` is cast to a tuple of depth 2, resulting in a tuple with two elements, both `"Hello"`.

### Example 3: Passing a tuple as the value

```python
from zeta import utils

val = (1, 2)
depth = 2
result = utils.cast_tuple(val, depth)

print(result)  # Prints: (1, 2)
```

In this example, a tuple is passed as `val`. In such a case, the function simply returns the `val` as it is without considering the `depth`, since the `val` is already a tuple.
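The behavior described in these examples can be captured by a minimal re-implementation; the following is a sketch of the documented semantics, not necessarily the library's exact source:

```python
from typing import Any, Tuple


def cast_tuple(val: Any, depth: int = 1) -> Tuple:
    # A tuple passes through unchanged; any other value is repeated depth times.
    return val if isinstance(val, tuple) else (val,) * depth


print(cast_tuple(5, 3))        # (5, 5, 5)
print(cast_tuple("Hello", 2))  # ('Hello', 'Hello')
print(cast_tuple((1, 2), 5))   # (1, 2) -- depth is ignored for tuples
```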

## Additional Information
<a id='additional-information'></a>

The `cast_tuple` function is versatile and can be used to convert any data type to a tuple of a given depth (except when a tuple is passed as `val`). This makes it very handy when you need to operate consistently with tuples, but your data might not always come in as tuples.


## References and Resources
<a id='references-resources'></a>

Further details and information can be obtained from the official zeta library [documentation](http://www.zeta-docs-url.com).

The full source code can be found on the [official Github](https://github.com/zeta-utils-repo/zeta-utils).

`docs/zeta/utils/cosine_beta_schedule.md` (55 additions, 41 deletions)
# cosine_beta_schedule

# Module Function Name: cosine_beta_schedule

`zeta.utils.cosine_beta_schedule(timesteps, s=0.008)` is a utility function that generates a schedule of beta values following a cosine curve. Such schedules are widely used as noise (variance) schedules in diffusion models, and more generally wherever a smooth annealing schedule is needed during training.

Here, we provide a comprehensive, step-by-step explanation of the `cosine_beta_schedule` function, from its arguments and types to usage examples.

## Function Definition

```python
import torch


def cosine_beta_schedule(timesteps, s=0.008):
    """
    Generates a cosine beta schedule for the given number of timesteps.

    Parameters:
    - timesteps (int): The number of timesteps for the schedule.
    - s (float): A small offset constant used in the calculation. Default: 0.008.

    Returns:
    - betas (torch.Tensor): The computed beta values for each timestep.
    """
    steps = timesteps + 1
    x = torch.linspace(0, timesteps, steps, dtype=torch.float64)
    alphas_cumprod = torch.cos(((x / timesteps) + s) / (1 + s) * torch.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return torch.clip(betas, 0, 0.9999)
```

## Parameters & Return

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| timesteps | int | The number of timesteps for the schedule | Required |
| s | float | A small offset constant used in the calculation | 0.008 |

| Return | Type | Description |
| --- | --- | --- |
| betas | torch.Tensor | The computed beta values for each timestep, a tensor of size `timesteps` |
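In mathematical form, the computation above is (with $T$ = `timesteps` and $t = 0, \dots, T$; this restates the code, term by term):

```math
\bar{\alpha}_t = \frac{\cos^2\!\left(\frac{t/T + s}{1 + s}\cdot\frac{\pi}{2}\right)}
                     {\cos^2\!\left(\frac{s}{1 + s}\cdot\frac{\pi}{2}\right)},
\qquad
\beta_t = 1 - \frac{\bar{\alpha}_t}{\bar{\alpha}_{t-1}},
\qquad
\beta_t \leftarrow \mathrm{clip}(\beta_t,\ 0,\ 0.9999)
```

The denominator in the first expression is the normalization by `alphas_cumprod[0]` performed in the code.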

## Example

First, import the necessary libraries:

```python
import torch
from zeta.utils import cosine_beta_schedule
```

Then call the function and inspect the result:

```python
beta_values = cosine_beta_schedule(1000)

# To access the beta value at timestep t=500
print(beta_values[500])
```

A custom cosine offset can also be supplied, e.g. `cosine_beta_schedule(1000, s=0.005)`.

In the code above, `cosine_beta_schedule` generates `beta_values` for the given number of timesteps (1000). The beta value at a particular timestep can then be accessed by index.

## Description

Essentially, this function generates a schedule based on the cosine beta function. This can be used to control the learning process in training algorithms. The function uses two parameters: `timesteps` and `s`.

The `timesteps` parameter is an integer representing the number of time intervals. The `s` parameter is a small constant used in the calculation to ensure numerical stability and it helps to control the shape of the beta schedule. In the function, `s` defaults to `0.008` if not provided.

The function first creates a 1D tensor `x` with `timesteps + 1` evenly spaced values from `0` to `timesteps`, then computes the cumulative alpha products by applying a squared cosine to `x`. The resulting sequence is normalized by its first element. Finally, the function computes the `beta_values` as one minus the ratio of consecutive alpha products, and clips them to the range [0, 0.9999]. These `beta_values` are returned as a tensor.

This construction makes the cumulative alpha products decrease smoothly from 1 toward 0 as the timesteps progress, so the corresponding `beta_values` start near 0 and grow toward the 0.9999 clip at the end of the schedule. The shape of this curve is influenced by the `s` parameter and can be adjusted by the user.

## Note

1. Be careful when selecting the number of timesteps. Higher timesteps might lead to a more finely tuned beta schedule, but it would also require more computational resources.
2. The `s` parameter affects the shape of the beta schedule. Adjust it according to your need.
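As a quick sanity check on the schedule's shape, the following standalone sketch re-defines the function inline (copying the definition shown earlier, so it runs without zeta installed) and verifies that the beta values grow across the schedule while staying within the clip range:

```python
import torch


def cosine_beta_schedule(timesteps, s=0.008):
    # Same computation as in the function definition shown earlier.
    steps = timesteps + 1
    x = torch.linspace(0, timesteps, steps, dtype=torch.float64)
    alphas_cumprod = torch.cos(((x / timesteps) + s) / (1 + s) * torch.pi * 0.5) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return torch.clip(betas, 0, 0.9999)


betas = cosine_beta_schedule(1000)

print(betas.shape)                   # torch.Size([1000])
print(bool(betas[0] < betas[-1]))    # True: betas grow over the schedule
print(float(betas.max()) <= 0.9999)  # True: values are clipped
```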

For further understanding and usage of this function, refer to the PyTorch documentation and communities.