ASL2Text is a project by GDSC LNMIIT that converts American Sign Language (ASL) gestures in video input into text output. It uses deep learning models to detect and recognize the signer's hand and body movements, then translates them into natural language sentences. The project can facilitate communication between deaf and hearing people, and can also serve as an accessibility and learning aid for ASL students.
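The pipeline described above (per-frame gesture recognition followed by translation into text) can be sketched at a high level. This is a minimal, purely illustrative example: the gloss labels, centroid vectors, and helper functions are all hypothetical, and a toy nearest-centroid classifier stands in for the deep learning model and landmark detector (e.g. a hand/pose estimator) a real system would use.

```python
import math

# Hypothetical gloss "centroids" standing in for a trained recognizer.
# A real model would be trained on landmark sequences, not 2-D toy vectors.
GLOSS_CENTROIDS = {
    "HELLO": [0.9, 0.1],
    "THANK-YOU": [0.1, 0.9],
}

def classify_frame(keypoints):
    """Map one frame's (toy) keypoint vector to the nearest known gloss."""
    return min(
        GLOSS_CENTROIDS,
        key=lambda g: math.dist(keypoints, GLOSS_CENTROIDS[g]),
    )

def frames_to_text(frames):
    """Classify each frame, collapse consecutive repeats, join into text.

    Collapsing repeats approximates segmenting a held sign across frames;
    a real system would use a sequence model for this step.
    """
    glosses = [classify_frame(f) for f in frames]
    collapsed = [g for i, g in enumerate(glosses) if i == 0 or g != glosses[i - 1]]
    return " ".join(collapsed)

# Three toy "frames" of keypoint features extracted from a video.
video = [[0.88, 0.12], [0.91, 0.08], [0.12, 0.85]]
print(frames_to_text(video))  # → HELLO THANK-YOU
```

The key design point this illustrates is the two-stage structure: a per-frame recognizer produces gloss labels, and a second stage turns the label sequence into readable text.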