This repository tracks releases of the HammerBlade source code and infrastructure. It can be used to:
- Simulate HammerBlade Nodes of diverse sizes and memory types
- Generate HammerBlade Amazon FPGA Images
- Create Amazon Machine Images with pre-installed tools and libraries
HammerBlade is an open-source manycore architecture for performing efficient computation on large general-purpose workloads. A HammerBlade is composed of nodes attached to a general-purpose host, similar to a general-purpose GPU. Each node is an array of tiles interconnected by a 2-D mesh network and attached to a flexible memory system.
HammerBlade is a Single-Program, Multiple-Data (SPMD) architecture: all tiles execute the same program on different sets of input data to complete a larger computation kernel. Programs are written in the CUDA-Lite language (C/C++) and executed on the tiles in parallel "groups" and sequential "grids". The CUDA-Lite host runtime (C/C++) manages parallel and sequential execution.
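As a minimal sketch of the SPMD model, the hypothetical CUDA-Lite device kernel below distributes a vector addition across the tiles in a group. The header name, the `__bsg_id` symbol, and the `bsg_tiles_X`/`bsg_tiles_Y` group-dimension macros follow conventions from the BSG Manycore repository but are assumptions here; consult that repository for the actual API.

```c
/* Hypothetical CUDA-Lite device kernel (names assumed, not authoritative).
 * Every tile in the group executes this same function (SPMD); each tile
 * uses its unique id to select the elements it is responsible for. */
#include "bsg_manycore.h"   /* assumed header: tile ids and group dimensions */

int kernel_vector_add(int *A, int *B, int *C, int N) {
    int tiles = bsg_tiles_X * bsg_tiles_Y;      /* tiles in this group (assumed macros) */
    for (int i = __bsg_id; i < N; i += tiles)   /* __bsg_id: this tile's linear id */
        C[i] = A[i] + B[i];
    return 0;
}
```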
HammerBlade is being integrated with higher-level parallel frameworks and Domain-Specific Languages. A PyTorch backend is being developed to accelerate machine learning, and a GraphIt code generator is being developed to support graph computations.
C/C++, Python, and PyTorch programs can interact with a Cooperatively Simulated (Cosimulated) HammerBlade node using Synopsys VCS or Verilator. The HammerBlade runtime and cosimulation top levels are in the BSG Replicant repository.
For a more in-depth overview of the HammerBlade architecture, see the HammerBlade Overview.
The architectural HDL for HammerBlade is in the BSG Manycore and BaseJump STL repositories. For technical details about the HammerBlade architecture, see the HammerBlade Technical Reference Manual.
To run simulated applications on HammerBlade, or to build FPGA images from this repository, follow the instructions below:
- To simulate with VCS you must have VCS-MX installed. (After 2019, VCS-MX is included with VCS.)
- Verilator is built as part of the setup process.
- To simulate or compile our AWS design, you must have Vivado 2019.1 installed and correctly configured in your environment (see the example after this list). The Vivado tools must have the Virtex UltraScale+ family device files installed; see page 40 in this guide. If you are using Vivado 2019.1 you will need to apply the following Answer Record (AR) before running simulation: https://www.xilinx.com/support/answers/72404.html.
The Makefiles will warn or fail if they cannot find the appropriate tools.
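As an example of "correctly configured," Vivado is typically set up by sourcing its settings script before running the Makefiles. The install prefix below is an assumption; adjust it to match your installation.

```
source /tools/Xilinx/Vivado/2019.1/settings64.sh
```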
Building the RISC-V Toolchain requires several distribution packages. The following are required on CentOS/RHEL-based distributions:

```
libmpc autoconf automake libtool curl gmp gawk bison flex texinfo gperf expat-devel dtc cmake3 python3-devel
```
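For example, these can be installed in one step with `yum` (or `dnf` on newer releases); note that some packages, such as cmake3 and dtc, may require the EPEL repository depending on your release:

```
sudo yum install libmpc autoconf automake libtool curl gmp gawk bison flex texinfo gperf expat-devel dtc cmake3 python3-devel
```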
On Debian-based distributions, the following packages are required:

```
libmpc-dev autoconf automake libtool curl libgmp-dev gawk bison flex texinfo gperf libexpat-dev device-tree-compiler cmake build-essential python3-dev
```
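The equivalent one-step install on Debian/Ubuntu is:

```
sudo apt-get install libmpc-dev autoconf automake libtool curl libgmp-dev gawk bison flex texinfo gperf libexpat-dev device-tree-compiler cmake build-essential python3-dev
```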
Non-Bespoke Silicon Group (BSG) users MUST have Vivado and VCS installed before performing these steps.
The default VCS environment simulates the manycore architecture, without any closed-source or encrypted IP.
- Initialize the submodules: `git submodule update --init --recursive`
- (BSG users only) Clone the CAD environment repository: `git clone git@github.com:taylor-bsg/bsg_cadenv.git`
- Run `make -f amibuild.mk riscv-tools`
Verilator simulates the HammerBlade architecture using C/C++ DPI functions in place of the AWS F1 design and Vivado IP.
- Initialize the submodules: `git submodule update --init --recursive`
- Run `make verilator-exe`
- Run `make -f amibuild.mk riscv-tools`
Non-Bespoke Silicon Group (BSG) users MUST have Vivado and VCS installed before performing these steps.
VCS simulates the FPGA design that is compiled for AWS F1 and uses Vivado IP.
- Initialize the submodules: `git submodule update --init --recursive`
- (BSG users only) Clone the CAD environment repository: `git clone git@github.com:taylor-bsg/bsg_cadenv.git`
- Run `make aws-fpga.setup.log`
- Run `make -f amibuild.mk riscv-tools`
Makefile targets:
- `setup`: Builds all tools and updates necessary for cosimulation
- `build-ami`: Builds the Amazon Machine Image (AMI) and emits the AMI ID
- `build-tarball`: Compiles the manycore design (locally) as a tarball
- `build-afi`: Uploads a Design Checkpoint (DCP) to AWS and processes it into an Amazon FPGA Image (AFI) with an Amazon Global FPGA Image ID (AGFI)
- `print-ami`: Prints the current AMI whose version matches `FPGA_IMAGE_VERSION` in project.mk

You can also run `make help` to see all of the available targets in this repository.
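As one illustrative sequence (inferred from the target descriptions above, not a prescribed flow), a new FPGA image and its matching AMI might be produced with:

```
make build-tarball  # compile the manycore design locally into a tarball
make build-afi      # upload the DCP and process it into an AFI/AGFI
make build-ami      # build the Amazon Machine Image and emit its ID
make print-ami      # print the AMI matching FPGA_IMAGE_VERSION
```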
- Makefile: provides targets for cloning repositories and building new Amazon Machine Images. See the section on Makefile targets for more information.
- amibuild.mk: provides targets for building and installing the manycore tools on an Amazon EC2 instance. Used indirectly by the `build-ami` target in Makefile.
- project.mk: defines paths to each of the submodule dependencies.
- scripts: scripts used to upload Amazon FPGA Images (AFIs) and configure Amazon Machine Images (AMIs).