
CMPSC 310 Activity 15

Deadline: April 12 by 9:50am

Assignment

For this activity, follow the TensorFlow tutorial Neural machine translation with a Transformer and Keras.

Submission

Submit the completed Colab notebook showing the generated output.

Description

Embedding and Positional Encoding

This layer takes a sequence of tokens and converts each one to a vector. Since the attention layers process these vectors in parallel with no inherent notion of order, a positional encoding is added so the model knows where each word sits in the sequence.
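A minimal sketch of such a layer, modeled on the tutorial's PositionalEmbedding (exact details may differ in the tutorial itself): token embeddings are looked up, scaled, and summed with fixed sinusoidal position vectors.

```python
import numpy as np
import tensorflow as tf

def positional_encoding(length, depth):
    # Sines and cosines of different frequencies, one row per position.
    depth = depth / 2
    positions = np.arange(length)[:, np.newaxis]       # (length, 1)
    depths = np.arange(depth)[np.newaxis, :] / depth   # (1, depth/2)
    angle_rads = positions / (10000**depths)           # (length, depth/2)
    encoding = np.concatenate([np.sin(angle_rads), np.cos(angle_rads)], axis=-1)
    return tf.cast(encoding, dtype=tf.float32)

class PositionalEmbedding(tf.keras.layers.Layer):
    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.d_model = d_model
        self.embedding = tf.keras.layers.Embedding(vocab_size, d_model, mask_zero=True)
        self.pos_encoding = positional_encoding(length=2048, depth=d_model)

    def call(self, x):
        length = tf.shape(x)[1]
        x = self.embedding(x)                                 # tokens -> vectors
        x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))  # scale embeddings
        return x + self.pos_encoding[tf.newaxis, :length, :]  # add position info
```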

Attention Layers

The model contains many of these basic attention layers. Each one works like a fuzzy, differentiable dictionary lookup: the model supplies a query, compares it against a set of keys, and gets back a weighted blend of the corresponding values rather than a single exact match.
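The soft-lookup behavior can be seen directly with Keras's built-in MultiHeadAttention layer. The shapes and random tensors below are illustrative only:

```python
import tensorflow as tf

mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=64)

# Dummy batch: 8 query vectors attending over 10 key/value vectors, width 64.
query = tf.random.normal((1, 8, 64))
context = tf.random.normal((1, 10, 64))

# Each query is compared against every key; the output is a weighted blend of
# the values, so the "lookup" is soft rather than an exact-match retrieval.
output, scores = mha(query=query, key=context, value=context,
                     return_attention_scores=True)
print(output.shape)  # (1, 8, 64)
print(scores.shape)  # (1, 2, 8, 10) -- per-head attention weights
```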

Cross Attention Layer

This attention layer is similar to the others, but its queries come from the decoder while its keys and values come from the encoder output, so it is the layer that connects the encoder and decoder.
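A sketch of a cross-attention layer in the spirit of the tutorial's CrossAttention class (the residual-plus-normalization wrapper is the same pattern the tutorial uses):

```python
import tensorflow as tf

class CrossAttention(tf.keras.layers.Layer):
    """Decoder queries attend over the encoder's output sequence."""
    def __init__(self, **kwargs):
        super().__init__()
        self.mha = tf.keras.layers.MultiHeadAttention(**kwargs)
        self.add = tf.keras.layers.Add()
        self.layernorm = tf.keras.layers.LayerNormalization()

    def call(self, x, context):
        # Queries come from the decoder (x); keys and values come from the
        # encoder output (context), which is what ties the two halves together.
        attn_output = self.mha(query=x, key=context, value=context)
        x = self.add([x, attn_output])  # residual connection
        return self.layernorm(x)
```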

Global Self-Attention Layer

This attention layer processes the context sequence and propagates information along its entire length. Using attention here, rather than RNNs or CNNs, lets every position be processed in parallel, so the work maps well onto GPU hardware and greatly improves performance and efficiency.
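A sketch of global self-attention under the same assumptions as the cross-attention example above; here query, key, and value are all the same sequence:

```python
import tensorflow as tf

class GlobalSelfAttention(tf.keras.layers.Layer):
    """Every position in the context sequence attends to every other position."""
    def __init__(self, **kwargs):
        super().__init__()
        self.mha = tf.keras.layers.MultiHeadAttention(**kwargs)
        self.add = tf.keras.layers.Add()
        self.layernorm = tf.keras.layers.LayerNormalization()

    def call(self, x):
        # Query, key, and value are all the same sequence; each pairwise
        # comparison is independent, so the layer parallelizes well on a GPU.
        attn_output = self.mha(query=x, key=x, value=x)
        x = self.add([x, attn_output])  # residual connection
        return self.layernorm(x)
```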
