[merge] ABC & MNIST example #55

Draft: wants to merge 84 commits into master.
Commits (84)
f34238d
[nn] Added ability to gather activation statistics during LUT inference.
nickfraser May 10, 2021
aa622a4
[jsc] Added script to save LUTs and activation statistics.
nickfraser May 10, 2021
4d75911
[nn] Added extra parameter to specify a cutoff, for if a TT entry sho…
nickfraser May 18, 2021
5382815
[jsc] Made verification of verilog simulation optional with a flag.
nickfraser May 18, 2021
45d7955
[jsc] Added loading of calculated histograms and specifying a TT freq…
nickfraser May 18, 2021
dc302d1
[nn] Added a default case to verilog LUT generation.
nickfraser May 19, 2021
536271b
[nn/jsc] Made registers optional in verilog generation. Default is no…
nickfraser May 20, 2021
f4e7810
[verilog] Added 'parallel case' statement to generated verilog.
nickfraser May 26, 2021
40d72e5
Revert "[verilog] Added 'parallel case' statement to generated verilog."
nickfraser May 31, 2021
781c49f
Merge remote-tracking branch 'public/master' into feat/track_cares
nickfraser Jun 18, 2021
7970fce
[jsc] Bugfixes in setting histograms / frequency values
nickfraser Jun 18, 2021
3ce8395
Merge remote-tracking branch 'public/master' into feat/track_cares
nickfraser Jun 25, 2021
fbf8238
[jsc] Updated default PCA to be 12 dimensions.
nickfraser Jun 28, 2021
01f8d43
[jsc] Fixed description of commandline arguments
nickfraser Aug 31, 2021
4c3ec04
Merge branch 'master' into feat/track_cares
nickfraser Sep 3, 2021
1ed85a5
[jsc] Initial basic code for abc integration.
nickfraser Sep 28, 2021
2857e71
[abc] Updated module to pull information from the input model. Create…
nickfraser Sep 28, 2021
52f8c64
Added ABC dependency to Dockerfile.
nickfraser Nov 2, 2021
c7be6c6
Merge branch 'master' into feat/track_cares
nickfraser Aug 18, 2022
691944e
[nids] Initial version supporting histograms.
nickfraser Aug 18, 2022
15600a6
Merge branch 'master' into feat/track_cares
nickfraser Oct 10, 2022
91aeb19
Merge remote-tracking branch 'origin/feat/track_cares' into feat/abc_…
nickfraser Oct 11, 2022
60e1213
[abc] Updated script generation function to support specifying the ra…
nickfraser Oct 11, 2022
5347456
[docker] Updated ABC version.
nickfraser Oct 11, 2022
f32fe89
[abc] Initial ABC synthesis flow. Need to fetch results from output s…
nickfraser Oct 12, 2022
5da1be5
[abc] Added option to specify the BDD command for synthesis preset.
nickfraser Oct 12, 2022
580a04f
[abc] Used re to extract important information from the ABC logs.
nickfraser Oct 12, 2022
2b4273e
[abc] Added functions for generic synthesis optimizations and final t…
nickfraser Oct 13, 2022
3149493
[abc] Updated simulation/evaluation to work on blif models.
nickfraser Oct 13, 2022
d345c38
[abc] Bugfixes for putontop commands.
nickfraser Oct 18, 2022
2273631
[abc] Disabled print of the best model in the iterative optimizer.
nickfraser Oct 18, 2022
0d01979
[abc] Added PyVerilator compatible verilog wrapper and post-process f…
nickfraser Oct 18, 2022
ebdb1f0
[synthesis] Updated ABC synthesis to fix the generated verilog and ge…
nickfraser Oct 18, 2022
71bc3d7
[jsc] Updated neq2lut_abc scripts to work with new end-to-end ABC syn…
nickfraser Oct 18, 2022
63d7547
[jsc] Added scripts to convert blif->verilog, and to test the verilog.
nickfraser Oct 27, 2022
98f8342
[abc] Added option to specify the mfs2 command and mapping command.
nickfraser Oct 27, 2022
a39ebf6
Merge branch 'feat/abc_integration' into feat/abc_integration_test_blif
nickfraser Oct 27, 2022
b63c3f6
[mnist] Initial train scripts and model definitions.
nickfraser Jan 4, 2023
1a29d68
[mnist/requirements] Added a requirements file for MNIST.
nickfraser Jan 4, 2023
f31c773
[mnist/readme] Added basic README template for MNIST.
nickfraser Jan 4, 2023
abf5fbb
[mnist] Removed references to the dataset config.
nickfraser Jan 4, 2023
ade9418
[mnist] Bugfix in default values for arch.
nickfraser Jan 4, 2023
54c6d5c
[mnist] Added first version of dataset_dump file, ready for testing.
nickfraser Jan 4, 2023
0c3bcf6
[mnist] First version of MNIST-S. ~96% accuracy.
nickfraser Jan 5, 2023
8a7d581
[examples/mnist] Added v1.1 models and seeds s-1.1, m-1.1, l-1.1 get …
nickfraser Feb 2, 2023
abc13e3
[ex/mnist] Added input_dropout as a configurable parameter
nickfraser Feb 28, 2023
1feca62
[ex/mnist] Updated seed for mnist-m
nickfraser Feb 28, 2023
aca10f1
[ex/mnist] Added very small configs
nickfraser Feb 28, 2023
2f90b13
Merge branch 'feat/track_cares' into merge/mnist_track_cares
nickfraser Feb 28, 2023
0de0ba2
[ex/mnist] Updated mnist example to track cares
nickfraser Feb 28, 2023
811f65f
[ex/mnist] Added dropout parameter to dataset/lut dumping scripts
nickfraser Mar 1, 2023
0a1715f
[ex/mnist] Bugfix in dataloader instantiation in the LUT dumping script
nickfraser Mar 1, 2023
3bb764c
[ex/mnist] Added extra M/L configurations
nickfraser Mar 3, 2023
58f2df9
[ex/mnist] Updated seeds for m-1.3, l-1.2, l-1.3 for (97.42, 97.63, 9…
nickfraser Mar 3, 2023
c5c2add
[ex/mnist] Allowed dataloaders to use a few threads
nickfraser Mar 3, 2023
ddf9f48
[ex/mnist] Added random cropping and random rotations to image prepro…
nickfraser Mar 3, 2023
598ed21
Merge branch 'master' into feat/track_cares
nickfraser Mar 3, 2023
26a6d36
[ex/mnist] Updated training preprocessing transforms
nickfraser Mar 4, 2023
410d493
[abc] Updated pipelining to return #nodes
nickfraser Mar 4, 2023
25b7fab
Merge branch 'feat/abc_integration' into feat/abc_integration_test_blif
nickfraser Mar 4, 2023
c921b1e
[ex/jsc] Bugfix when simulating a pipelined design
nickfraser Mar 5, 2023
0740d26
[verilog] Adds missing clock from ABC-generated verilog, if necessary
nickfraser Mar 5, 2023
0e63348
Merge branch 'feat/abc_integration' into feat/abc_integration_test_blif
nickfraser Mar 5, 2023
bc34da0
[ex/jsc] Added FPGA synthesis script
nickfraser Mar 5, 2023
f04214a
Merge branch 'feat/track_cares' into feat/abc_integration
nickfraser Mar 5, 2023
ccd3248
Merge branch 'feat/abc_integration' into feat/abc_integration_test_blif
nickfraser Mar 5, 2023
7e83a9c
[ex/jsc] Bugfix / added AVG ROC-AUC to results
nickfraser Mar 5, 2023
43251e8
Merge branch 'feat/track_cares' into feat/abc_integration
nickfraser Mar 5, 2023
e550cfe
Merge branch 'feat/abc_integration' into feat/abc_integration_test_blif
nickfraser Mar 5, 2023
3b53131
[ex/jsc] Bugfix: added AVG ROC-AUC results
nickfraser Mar 5, 2023
3074012
Merge branch 'feat/abc_integration' into feat/abc_integration_test_blif
nickfraser Mar 5, 2023
b42cea7
[ex/jsc] Bugfix: support measuring AVG ROC-AUC
nickfraser Mar 5, 2023
7f0b365
Merge remote-tracking branch 'origin/feat/abc_integration_test_blif' …
nickfraser Mar 6, 2023
70fb8bf
[ex/mnist] Initial version supporting ABC flow
nickfraser Mar 6, 2023
9f40656
[ex/mnist] Initial version supporting BLIF conversion and testing
nickfraser Mar 6, 2023
e1b3d16
[ex/mnist] Bugfix in dataset loader
nickfraser Mar 6, 2023
c3b33a9
[ex/mnist] Bugfix in dataset configuration
nickfraser Mar 6, 2023
3bf4be7
[abc] Added patch for ABC when input bits>1000
nickfraser Sep 12, 2024
87a6b63
Merge branch 'master' into merge/mnist_track_cares_abc_integration_te…
nickfraser Nov 14, 2024
3713b6d
[example/jsc] Bugfix
nickfraser Nov 18, 2024
0165476
[example/jsc] Only insert timescale if registers
nickfraser Nov 18, 2024
fec8657
[docker] Bugfixes to ABC build
nickfraser Nov 18, 2024
d835355
Merge branch 'feat/track_cares' into feat/abc_integration_test_blif
nickfraser Nov 18, 2024
77564f9
Merge branch 'feat/abc_integration_test_blif' into merge/mnist_track_…
nickfraser Nov 18, 2024
12 changes: 11 additions & 1 deletion docker/Dockerfile.cpu
@@ -29,7 +29,7 @@ RUN apt-get -qq update && apt-get -qq -y install curl bzip2 \
&& rm -rf /var/lib/apt/lists/* /var/log/dpkg.log

# Install LogicNets system prerequisites
RUN apt-get -qq update && apt-get -qq -y install verilator build-essential libx11-6 git \
RUN apt-get -qq update && apt-get -qq -y install verilator build-essential libx11-6 git libreadline-dev \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/* /var/log/dpkg.log

@@ -41,6 +41,16 @@ ENV OHMYXILINX=/workspace/oh-my-xilinx
RUN git clone https://github.com/dirjud/Nitro-Parts-lib-Xilinx.git
ENV NITROPARTSLIB=/workspace/Nitro-Parts-lib-Xilinx

# Adding LogicNets dependency on ABC
COPY examples/mnist/abc.patch /workspace/
RUN git clone https://github.com/berkeley-abc/abc.git \
&& cd abc \
&& git checkout 813a0f1ff1ae7512cb7947f54cd3f2ab252848c8 \
&& git apply /workspace/abc.patch \
&& rm -f /workspace/abc.patch \
&& make -j`nproc`
ENV ABC_ROOT=/workspace/abc

# Create the user account to run LogicNets
RUN groupadd -g $GID $GNAME
RUN useradd -m -u $UID $UNAME -g $GNAME
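The example scripts locate ABC through the `ABC_ROOT` environment variable set above, so a small smoke test can confirm that the pinned, patched build works inside the container. This is an illustrative sketch, not part of the PR; it assumes only that the berkeley-abc binary accepts `-c "<command>"` to run a command non-interactively.

```python
# Smoke test for the ABC build (illustrative sketch; not part of this PR).
import os
import subprocess

abc_bin = os.path.join(os.environ.get("ABC_ROOT", "/workspace/abc"), "abc")
assert os.access(abc_bin, os.X_OK), f"ABC binary not found/executable at {abc_bin}"

# 'help' is a built-in ABC command, so this exercises the binary end to end.
result = subprocess.run([abc_bin, "-c", "help"], capture_output=True, text=True, timeout=30)
assert result.returncode == 0, result.stderr
print("ABC responds:", (result.stdout.splitlines() or ["<no output>"])[0])
```
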
119 changes: 119 additions & 0 deletions examples/cybersecurity/dump_luts.py
@@ -0,0 +1,119 @@
# Copyright (C) 2021 Xilinx, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
from argparse import ArgumentParser

import torch
from torch.utils.data import DataLoader

from logicnets.nn import generate_truth_tables, \
lut_inference, \
save_luts, \
module_list_to_verilog_module

from train import configs, model_config, dataset_config, other_options, test
from dataset import get_preqnt_dataset
from models import UnswNb15NeqModel, UnswNb15LutModel

if __name__ == "__main__":
parser = ArgumentParser(description="Generate histograms of states used throughout LogicNets")
parser.add_argument('--arch', type=str, choices=configs.keys(), default="nid-s",
help="Specify the neural network model to use (default: %(default)s)")
parser.add_argument('--batch-size', type=int, default=None, metavar='N',
help="Batch size for evaluation (default: %(default)s)")
parser.add_argument('--input-bitwidth', type=int, default=None,
help="Bitwidth to use at the input (default: %(default)s)")
parser.add_argument('--hidden-bitwidth', type=int, default=None,
help="Bitwidth to use for activations in hidden layers (default: %(default)s)")
parser.add_argument('--output-bitwidth', type=int, default=None,
help="Bitwidth to use at the output (default: %(default)s)")
parser.add_argument('--input-fanin', type=int, default=None,
help="Fanin to use at the input (default: %(default)s)")
parser.add_argument('--hidden-fanin', type=int, default=None,
help="Fanin to use for the hidden layers (default: %(default)s)")
parser.add_argument('--output-fanin', type=int, default=None,
help="Fanin to use at the output (default: %(default)s)")
parser.add_argument('--hidden-layers', nargs='+', type=int, default=None,
help="A list of hidden layer neuron sizes (default: %(default)s)")
parser.add_argument('--dataset-file', type=str, default='data/unsw_nb15_binarized.npz',
help="The file to use as the dataset input (default: %(default)s)")
parser.add_argument('--log-dir', type=str, default='./log',
help="A location to store the calculated histograms (default: %(default)s)")
parser.add_argument('--checkpoint', type=str, required=True,
help="The checkpoint file which contains the model weights")
args = parser.parse_args()
defaults = configs[args.arch]
options = vars(args)
del options['arch']
config = {}
for k in options.keys():
config[k] = options[k] if options[k] is not None else defaults[k] # Override defaults, if specified.

if not os.path.exists(config['log_dir']):
os.makedirs(config['log_dir'])

# Split up configuration options to be more understandable
model_cfg = {}
for k in model_config.keys():
model_cfg[k] = config[k]
dataset_cfg = {}
for k in dataset_config.keys():
dataset_cfg[k] = config[k]
options_cfg = {}
for k in other_options.keys():
if k == 'cuda':
continue
options_cfg[k] = config[k]

# Fetch the test set
dataset = {}
dataset['train'] = get_preqnt_dataset(dataset_cfg['dataset_file'], split='train')
train_loader = DataLoader(dataset["train"], batch_size=config['batch_size'], shuffle=False)

# Instantiate the PyTorch model
x, y = dataset['train'][0]
dataset_length = len(dataset['train'])
model_cfg['input_length'] = len(x)
model_cfg['output_length'] = 1
model = UnswNb15NeqModel(model_cfg)

# Load the model weights
checkpoint = torch.load(options_cfg['checkpoint'], map_location='cpu')
model.load_state_dict(checkpoint['model_dict'])

# Test the PyTorch model
print("Running inference of baseline model on training set (%d examples)..." % (dataset_length))
model.eval()
baseline_accuracy = test(model, train_loader, cuda=False)
print("Baseline accuracy: %f" % (baseline_accuracy))

# Instantiate LUT-based model
lut_model = UnswNb15LutModel(model_cfg)
lut_model.load_state_dict(checkpoint['model_dict'])

# Generate the truth tables in the LUT module
print("Converting to NEQs to LUTs...")
generate_truth_tables(lut_model, verbose=True)

# Test the LUT-based model
print("Running inference of LUT-based model training set (%d examples)..." % (dataset_length))
lut_inference(lut_model, track_used_luts=True)
lut_model.eval()
lut_accuracy = test(lut_model, train_loader, cuda=False)
print("LUT-Based Model accuracy: %f" % (lut_accuracy))
print("Saving LUTs to %s... " % (options_cfg["log_dir"] + "/luts.pth"))
save_luts(lut_model, options_cfg["log_dir"] + "/luts.pth")
print("Done!")

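The script finishes by serializing the gathered activation statistics with `save_luts`. The output is an ordinary PyTorch checkpoint, so it can be inspected directly. A minimal sketch follows; the per-entry layout shown is an assumption for illustration, since the real structure is defined by `logicnets.nn.save_luts` and is not part of this diff.

```python
# Illustrative inspection of the saved LUT statistics (layout assumed).
import torch

luts = torch.load("./log/luts.pth", map_location="cpu")

# Assumption: a mapping from layer/neuron names to histogram tensors of
# truth-table visit counts. Adjust to whatever save_luts actually emits.
for name, hist in luts.items():
    size = tuple(hist.shape) if hasattr(hist, "shape") else type(hist).__name__
    print(f"{name}: {size}")
```
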
17 changes: 10 additions & 7 deletions examples/cybersecurity/models.py
@@ -63,13 +63,15 @@ def __init__(self, model_config):
self.verilog_dir = None
self.top_module_filename = None
self.dut = None
self.verify = True
self.logfile = None

def verilog_inference(self, verilog_dir, top_module_filename, logfile: bool = False, add_registers: bool = False):
def verilog_inference(self, verilog_dir, top_module_filename, logfile: bool = False, add_registers: bool = False, verify: bool = True):
self.verilog_dir = realpath(verilog_dir)
self.top_module_filename = top_module_filename
self.dut = PyVerilator.build(f"{self.verilog_dir}/{self.top_module_filename}", verilog_path=[self.verilog_dir], build_dir=f"{self.verilog_dir}/verilator")
self.dut = PyVerilator.build(f"{self.verilog_dir}/{self.top_module_filename}", verilog_path=[self.verilog_dir], build_dir=f"{self.verilog_dir}/verilator", command_args=("--x-assign","0",))
self.is_verilog_inference = True
self.verify = verify
self.logfile = logfile
if add_registers:
self.latency = len(self.num_neurons)
@@ -95,21 +97,22 @@ def verilog_forward(self, x):
self.dut.io.clk = 0
for i in range(x.shape[0]):
x_i = x[i,:]
y_i = self.pytorch_forward(x[i:i+1,:])[0]
xv_i = list(map(lambda z: input_quant.get_bin_str(z), x_i))
ys_i = list(map(lambda z: output_quant.get_bin_str(z), y_i))
xvc_i = reduce(lambda a,b: a+b, xv_i[::-1])
ysc_i = reduce(lambda a,b: a+b, ys_i[::-1])
self.dut["M0"] = int(xvc_i, 2)
for j in range(self.latency + 1):
#print(self.dut.io.M5)
res = self.dut[f"M{num_layers}"]
result = f"{res:0{int(total_output_bits)}b}"
self.dut.io.clk = 1
self.dut.io.clk = 0
expected = f"{int(ysc_i,2):0{int(total_output_bits)}b}"
result = f"{res:0{int(total_output_bits)}b}"
assert(expected == result)
if self.verify:
y_i = self.pytorch_forward(x[i:i+1,:])[0]
ys_i = list(map(lambda z: output_quant.get_bin_str(z), y_i))
ysc_i = reduce(lambda a,b: a+b, ys_i[::-1])
expected = f"{int(ysc_i,2):0{int(total_output_bits)}b}"
assert(expected == result)
res_split = [result[i:i+output_bitwidth] for i in range(0, len(result), output_bitwidth)][::-1]
yv_i = torch.Tensor(list(map(lambda z: int(z, 2), res_split)))
y[i,:] = yv_i
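For reference, the packing convention used above when driving the DUT concatenates the per-element binary strings in reverse order (`xv_i[::-1]`), which places element 0 in the least-significant bits of the integer written to the module input. A self-contained sketch, with a toy fixed-width quantizer standing in for `get_bin_str`:

```python
# Sketch of the input bit-packing used in verilog_forward (toy quantizer).
from functools import reduce

def pack(values, bitwidth=2):
    # Stand-in for input_quant.get_bin_str: unsigned fixed-width binary.
    bin_strs = [f"{v:0{bitwidth}b}" for v in values]
    # Reversing before concatenation places element 0 in the LSBs,
    # matching reduce(lambda a, b: a + b, xv_i[::-1]) in the model code.
    return int(reduce(lambda a, b: a + b, bin_strs[::-1]), 2)

assert pack([1, 2, 3]) == 0b111001  # bits, high to low: elem2|elem1|elem0
```
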
19 changes: 15 additions & 4 deletions examples/cybersecurity/neq2lut.py
@@ -20,7 +20,8 @@

from logicnets.nn import generate_truth_tables, \
lut_inference, \
module_list_to_verilog_module
module_list_to_verilog_module, \
load_histograms
from logicnets.synthesis import synthesize_and_get_resource_counts
from logicnets.util import proc_postsynth_file

@@ -34,6 +35,8 @@
"checkpoint": None,
"generate_bench": False,
"add_registers": False,
"histograms": None,
"freq_thresh": None,
"simulate_pre_synthesis_verilog": False,
"simulate_post_synthesis_verilog": False,
}
@@ -68,6 +71,10 @@
help="A location to store the log output of the training run and the output model (default: %(default)s)")
parser.add_argument('--checkpoint', type=str, required=True,
help="The checkpoint file which contains the model weights")
parser.add_argument('--histograms', type=str, default=None,
help="The checkpoint histograms of LUT usage (default: %(default)s)")
parser.add_argument('--freq-thresh', type=int, default=None,
help="Threshold to use to include this truth table into the model (default: %(default)s)")
parser.add_argument('--generate-bench', action='store_true', default=False,
help="Generate the truth table in BENCH format as well as verilog (default: %(default)s)")
parser.add_argument('--dump-io', action='store_true', default=False,
@@ -143,9 +150,12 @@
'test_accuracy': lut_accuracy}

torch.save(modelSave, options_cfg["log_dir"] + "/lut_based_model.pth")
if options_cfg["histograms"] is not None:
luts = torch.load(options_cfg["histograms"])
load_histograms(lut_model, luts)

print("Generating verilog in %s..." % (options_cfg["log_dir"]))
module_list_to_verilog_module(lut_model.module_list, "logicnet", options_cfg["log_dir"], generate_bench=options_cfg["generate_bench"], add_registers=options_cfg["add_registers"])
module_list_to_verilog_module(lut_model.module_list, "logicnet", options_cfg["log_dir"], generate_bench=options_cfg["generate_bench"], add_registers=options_cfg["add_registers"], freq_thresh=options_cfg["freq_thresh"])
print("Top level entity stored at: %s/logicnet.v ..." % (options_cfg["log_dir"]))

if args.dump_io:
@@ -156,9 +166,10 @@
else:
io_filename = None


if args.simulate_pre_synthesis_verilog:
print("Running inference simulation of Verilog-based model...")
lut_model.verilog_inference(options_cfg["log_dir"], "logicnet.v", logfile=io_filename, add_registers=options_cfg["add_registers"])
lut_model.verilog_inference(options_cfg["log_dir"], "logicnet.v", logfile=io_filename, add_registers=options_cfg["add_registers"], verify=options_cfg["freq_thresh"] is None or options_cfg["freq_thresh"] == 0)
verilog_accuracy = test(lut_model, test_loader, cuda=False)
print("Verilog-Based Model accuracy: %f" % (verilog_accuracy))

@@ -168,7 +179,7 @@
if args.simulate_post_synthesis_verilog:
print("Running post-synthesis inference simulation of Verilog-based model...")
proc_postsynth_file(options_cfg["log_dir"])
lut_model.verilog_inference(options_cfg["log_dir"]+"/post_synth", "logicnet_post_synth.v", io_filename, add_registers=options_cfg["add_registers"])
lut_model.verilog_inference(options_cfg["log_dir"]+"/post_synth", "logicnet_post_synth.v", io_filename, add_registers=options_cfg["add_registers"], verify=options_cfg["freq_thresh"] is None or options_cfg["freq_thresh"] == 0)
post_synth_accuracy = test(lut_model, test_loader, cuda=False)
print("Post-synthesis Verilog-Based Model accuracy: %f" % (post_synth_accuracy))

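One subtlety is worth spelling out: `verify` is forced off whenever a non-zero `--freq-thresh` is supplied. Truth-table entries observed less often than the threshold are dropped and routed to a default case in the generated verilog (see the earlier commits that add a cutoff parameter and a verilog default case), so the simulated design can legitimately disagree with the PyTorch model on rare inputs and the per-sample assertion must be skipped. A hedged sketch of the pruning idea, with hypothetical names; the real implementation lives in `logicnets.nn` and is not part of this diff:

```python
# Hypothetical sketch of frequency-threshold pruning (names illustrative).
def prune_truth_table(tt_entries, hist, freq_thresh):
    """Keep entries seen at least freq_thresh times on the training set.

    tt_entries: {input_pattern: output_bits}; hist: {input_pattern: count}.
    Dropped patterns fall through to a single verilog `default:` case.
    """
    return {p: o for p, o in tt_entries.items() if hist.get(p, 0) >= freq_thresh}

kept = prune_truth_table({0b00: 1, 0b01: 0, 0b10: 1},
                         {0b00: 412, 0b01: 3},
                         freq_thresh=10)
assert kept == {0b00: 1}
```
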
12 changes: 12 additions & 0 deletions examples/cybersecurity/train.py
@@ -44,6 +44,8 @@
"learning_rate": 1e-1,
"seed": 109,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-s-comp": {
"hidden_layers": [49, 7],
@@ -59,6 +61,8 @@
"learning_rate": 1e-1,
"seed": 81,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-m": {
"hidden_layers": [593, 256, 128, 128],
@@ -74,6 +78,8 @@
"learning_rate": 1e-1,
"seed": 196,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-m-comp": {
"hidden_layers": [593, 256, 49, 7],
@@ -89,6 +95,8 @@
"learning_rate": 1e-1,
"seed": 40,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-l": {
"hidden_layers": [593, 100, 100, 100],
@@ -104,6 +112,8 @@
"learning_rate": 1e-1,
"seed": 2,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-l-comp": {
"hidden_layers": [593, 100, 25, 5],
@@ -119,6 +129,8 @@
"learning_rate": 1e-1,
"seed": 83,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
}

29 changes: 29 additions & 0 deletions examples/jet_substructure/README.md
@@ -72,3 +72,32 @@ our paper below:
}
```

## Testing BLIF Files on the JSC Dataset

In this section, we show how to take technology-mapped BLIF files,
generate technology-mapped verilog from them, and simulate that verilog on the JSC dataset.

### Convert BLIF Files into Verilog

To convert the full BLIF files (as generated from the LogicNets examples, via `neq2lut_abc.py`) into verilog, run the following:

```bash
python blif2verilog.py --arch <jsc-s|jsc-m|jsc-l> --input-blif <path_to_tech_mapped_blif>/layers_full_opt.blif --output-directory <output_directory>
```

To convert the layer-wise BLIF files into verilog, run the following:

```bash
python blif2verilog.py --arch <jsc-s|jsc-m|jsc-l> --input-blifs <path_to_tech_mapped_blif>/*.blif --output-directory <output_directory> --generated-module-name-prefix layer0
```

Note that the generated module name prefix will likely need to change if the source files are handled differently.

### Simulate Verilog

The resultant verilog can be simulated as follows:

```bash
python simulate_verilog.py --arch <jsc-s|jsc-m|jsc-l> --checkpoint <path_to_checkpoint> --input-verilog <output_directory>/logicnet.v
```
