[abc] Add integration with ABC #53

Draft · wants to merge 43 commits into master

Commits (43)
f34238d
[nn] Added ability to gather activation statistics during LUT inference.
nickfraser May 10, 2021
aa622a4
[jsc] Added script to save LUTs and activation statistics.
nickfraser May 10, 2021
4d75911
[nn] Added extra parameter to specify a cutoff, for if a TT entry sho…
nickfraser May 18, 2021
5382815
[jsc] Made verification of verilog simulation optional with a flag.
nickfraser May 18, 2021
45d7955
[jsc] Added loading of calculated histograms and specifying a TT freq…
nickfraser May 18, 2021
dc302d1
[nn] Added a default case to verilog LUT generation.
nickfraser May 19, 2021
536271b
[nn/jsc] Made registers optional in verilog generation. Default is no…
nickfraser May 20, 2021
f4e7810
[verilog] Added 'parallel case' statement to generated verilog.
nickfraser May 26, 2021
40d72e5
Revert "[verilog] Added 'parallel case' statement to generated verilog."
nickfraser May 31, 2021
781c49f
Merge remote-tracking branch 'public/master' into feat/track_cares
nickfraser Jun 18, 2021
7970fce
[jsc] Bugfixes in setting histograms / frequency values
nickfraser Jun 18, 2021
3ce8395
Merge remote-tracking branch 'public/master' into feat/track_cares
nickfraser Jun 25, 2021
fbf8238
[jsc] Updated default PCA to be 12 dimensions.
nickfraser Jun 28, 2021
01f8d43
[jsc] Fixed description of commandline arguments
nickfraser Aug 31, 2021
4c3ec04
Merge branch 'master' into feat/track_cares
nickfraser Sep 3, 2021
1ed85a5
[jsc] Initial basic code for abc integration.
nickfraser Sep 28, 2021
2857e71
[abc] Updated module to pull information from the input model. Create…
nickfraser Sep 28, 2021
52f8c64
Added ABC dependency to Dockerfile.
nickfraser Nov 2, 2021
c7be6c6
Merge branch 'master' into feat/track_cares
nickfraser Aug 18, 2022
691944e
[nids] Initial version supporting histograms.
nickfraser Aug 18, 2022
15600a6
Merge branch 'master' into feat/track_cares
nickfraser Oct 10, 2022
91aeb19
Merge remote-tracking branch 'origin/feat/track_cares' into feat/abc_…
nickfraser Oct 11, 2022
60e1213
[abc] Updated script generation function to support specifying the ra…
nickfraser Oct 11, 2022
5347456
[docker] Updated ABC version.
nickfraser Oct 11, 2022
f32fe89
[abc] Initial ABC synthesis flow. Need to fetch results from output s…
nickfraser Oct 12, 2022
5da1be5
[abc] Added option to specify the BDD command for synthesis preset.
nickfraser Oct 12, 2022
580a04f
[abc] Used re to extract important information from the ABC logs.
nickfraser Oct 12, 2022
2b4273e
[abc] Added functions for generic synthesis optimizations and final t…
nickfraser Oct 13, 2022
3149493
[abc] Updated simulation/evaluation to work on blif models.
nickfraser Oct 13, 2022
d345c38
[abc] Bugfixes for putontop commands.
nickfraser Oct 18, 2022
2273631
[abc] Disabled print of the best model in the iterative optimizer.
nickfraser Oct 18, 2022
0d01979
[abc] Added PyVerilator compatible verilog wrapper and post-process f…
nickfraser Oct 18, 2022
ebdb1f0
[synthesis] Updated ABC synthesis to fix the generated verilog and ge…
nickfraser Oct 18, 2022
71bc3d7
[jsc] Updated neq2lut_abc scripts to work with new end-to-end ABC syn…
nickfraser Oct 18, 2022
98f8342
[abc] Added option to specify the mfs2 command and mapping command.
nickfraser Oct 27, 2022
598ed21
Merge branch 'master' into feat/track_cares
nickfraser Mar 3, 2023
410d493
[abc] Updated pipelining to return #nodes
nickfraser Mar 4, 2023
0740d26
[verilog] Adds missing clock from ABC-generated verilog, if necessary
nickfraser Mar 5, 2023
f04214a
Merge branch 'feat/track_cares' into feat/abc_integration
nickfraser Mar 5, 2023
7e83a9c
[ex/jsc] Bugfix / added AVG ROC-AUC to results
nickfraser Mar 5, 2023
43251e8
Merge branch 'feat/track_cares' into feat/abc_integration
nickfraser Mar 5, 2023
3b53131
[ex/jsc] Bugfix: added AVG ROC-AUC results
nickfraser Mar 5, 2023
1e0c9f2
[docker] Bugfixes to ABC build
nickfraser Nov 18, 2024
12 changes: 11 additions & 1 deletion docker/Dockerfile.cpu
@@ -29,7 +29,7 @@ RUN apt-get -qq update && apt-get -qq -y install curl bzip2 \
&& rm -rf /var/lib/apt/lists/* /var/log/dpkg.log

# Install LogicNets system prerequisites
RUN apt-get -qq update && apt-get -qq -y install verilator build-essential libx11-6 git \
RUN apt-get -qq update && apt-get -qq -y install verilator build-essential libx11-6 git libreadline-dev \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/* /var/log/dpkg.log

@@ -41,6 +41,16 @@ ENV OHMYXILINX=/workspace/oh-my-xilinx
RUN git clone https://github.com/dirjud/Nitro-Parts-lib-Xilinx.git
ENV NITROPARTSLIB=/workspace/Nitro-Parts-lib-Xilinx

# Adding LogicNets dependency on ABC
COPY examples/mnist/abc.patch /workspace/
RUN git clone https://github.com/berkeley-abc/abc.git \
&& cd abc \
&& git checkout 813a0f1ff1ae7512cb7947f54cd3f2ab252848c8 \
&& git apply /workspace/abc.patch \
&& rm -f /workspace/abc.patch \
&& make -j`nproc`
ENV ABC_ROOT=/workspace/abc

# Create the user account to run LogicNets
RUN groupadd -g $GID $GNAME
RUN useradd -m -u $UID $UNAME -g $GNAME
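The Dockerfile builds ABC at a pinned commit and exports `ABC_ROOT` for the rest of the flow. A minimal sketch of how a Python wrapper might shell out to that binary — the helper name is hypothetical, though `abc -c "<commands>"` and the commands shown are standard ABC usage:

```python
import os
import subprocess

def run_abc_script(script: str) -> str:
    """Run a semicolon-separated ABC command script and return its stdout."""
    abc_root = os.environ["ABC_ROOT"]        # exported by the Dockerfile
    abc_bin = os.path.join(abc_root, "abc")  # 'make' builds ./abc in-tree
    result = subprocess.run([abc_bin, "-c", script],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Example: read a BLIF netlist, restructure, optimize, and report statistics.
print(run_abc_script("read_blif logicnet.blif; strash; dc2; print_stats"))
```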
119 changes: 119 additions & 0 deletions examples/cybersecurity/dump_luts.py
@@ -0,0 +1,119 @@
# Copyright (C) 2021 Xilinx, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
from argparse import ArgumentParser

import torch
from torch.utils.data import DataLoader

from logicnets.nn import generate_truth_tables, \
lut_inference, \
save_luts, \
module_list_to_verilog_module

from train import configs, model_config, dataset_config, other_options, test
from dataset import get_preqnt_dataset
from models import UnswNb15NeqModel, UnswNb15LutModel

if __name__ == "__main__":
parser = ArgumentParser(description="Generate histograms of states used throughout LogicNets")
parser.add_argument('--arch', type=str, choices=configs.keys(), default="jsc-s",
help="Specific the neural network model to use (default: %(default)s)")
parser.add_argument('--batch-size', type=int, default=None, metavar='N',
help="Batch size for evaluation (default: %(default)s)")
parser.add_argument('--input-bitwidth', type=int, default=None,
help="Bitwidth to use at the input (default: %(default)s)")
parser.add_argument('--hidden-bitwidth', type=int, default=None,
help="Bitwidth to use for activations in hidden layers (default: %(default)s)")
parser.add_argument('--output-bitwidth', type=int, default=None,
help="Bitwidth to use at the output (default: %(default)s)")
parser.add_argument('--input-fanin', type=int, default=None,
help="Fanin to use at the input (default: %(default)s)")
parser.add_argument('--hidden-fanin', type=int, default=None,
help="Fanin to use for the hidden layers (default: %(default)s)")
parser.add_argument('--output-fanin', type=int, default=None,
help="Fanin to use at the output (default: %(default)s)")
parser.add_argument('--hidden-layers', nargs='+', type=int, default=None,
help="A list of hidden layer neuron sizes (default: %(default)s)")
parser.add_argument('--dataset-file', type=str, default='data/unsw_nb15_binarized.npz',
help="The file to use as the dataset input (default: %(default)s)")
parser.add_argument('--log-dir', type=str, default='./log',
help="A location to store the calculated histograms (default: %(default)s)")
parser.add_argument('--checkpoint', type=str, required=True,
help="The checkpoint file which contains the model weights")
args = parser.parse_args()
defaults = configs[args.arch]
options = vars(args)
del options['arch']
config = {}
for k in options.keys():
config[k] = options[k] if options[k] is not None else defaults[k] # Override defaults, if specified.

if not os.path.exists(config['log_dir']):
os.makedirs(config['log_dir'])

# Split up configuration options to be more understandable
model_cfg = {}
for k in model_config.keys():
model_cfg[k] = config[k]
dataset_cfg = {}
for k in dataset_config.keys():
dataset_cfg[k] = config[k]
options_cfg = {}
for k in other_options.keys():
if k == 'cuda':
continue
options_cfg[k] = config[k]

# Fetch the test set
dataset = {}
dataset['train'] = get_preqnt_dataset(dataset_cfg['dataset_file'], split='train')
train_loader = DataLoader(dataset["train"], batch_size=config['batch_size'], shuffle=False)

# Instantiate the PyTorch model
x, y = dataset['train'][0]
dataset_length = len(dataset['train'])
model_cfg['input_length'] = len(x)
model_cfg['output_length'] = 1
model = UnswNb15NeqModel(model_cfg)

# Load the model weights
checkpoint = torch.load(options_cfg['checkpoint'], map_location='cpu')
model.load_state_dict(checkpoint['model_dict'])

# Test the PyTorch model
print("Running inference of baseline model on training set (%d examples)..." % (dataset_length))
model.eval()
baseline_accuracy = test(model, train_loader, cuda=False)
print("Baseline accuracy: %f" % (baseline_accuracy))

# Instantiate LUT-based model
lut_model = UnswNb15LutModel(model_cfg)
lut_model.load_state_dict(checkpoint['model_dict'])

# Generate the truth tables in the LUT module
print("Converting to NEQs to LUTs...")
generate_truth_tables(lut_model, verbose=True)

# Test the LUT-based model
print("Running inference of LUT-based model training set (%d examples)..." % (dataset_length))
lut_inference(lut_model, track_used_luts=True)
lut_model.eval()
lut_accuracy = test(lut_model, train_loader, cuda=False)
print("LUT-Based Model accuracy: %f" % (lut_accuracy))
print("Saving LUTs to %s... " % (options_cfg["log_dir"] + "/luts.pth"))
save_luts(lut_model, options_cfg["log_dir"] + "/luts.pth")
print("Done!")

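In the intended flow, `dump_luts.py` is run once over the training set to write `log/luts.pth`, which `neq2lut.py` (further down) reloads through its `--histograms` flag and applies with `load_histograms`. A small sketch for inspecting the saved file; treating it as a mapping from layer/neuron names to per-entry visit counts is an assumption here:

```python
import torch

# luts.pth is written by save_luts(lut_model, ...) above.
luts = torch.load("./log/luts.pth")
for name, hist in luts.items():
    print(name, getattr(hist, "shape", type(hist)))
```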
17 changes: 10 additions & 7 deletions examples/cybersecurity/models.py
@@ -63,13 +63,15 @@ def __init__(self, model_config):
self.verilog_dir = None
self.top_module_filename = None
self.dut = None
self.verify = True
self.logfile = None

def verilog_inference(self, verilog_dir, top_module_filename, logfile: bool = False, add_registers: bool = False):
def verilog_inference(self, verilog_dir, top_module_filename, logfile: bool = False, add_registers: bool = False, verify: bool = True):
self.verilog_dir = realpath(verilog_dir)
self.top_module_filename = top_module_filename
self.dut = PyVerilator.build(f"{self.verilog_dir}/{self.top_module_filename}", verilog_path=[self.verilog_dir], build_dir=f"{self.verilog_dir}/verilator")
self.dut = PyVerilator.build(f"{self.verilog_dir}/{self.top_module_filename}", verilog_path=[self.verilog_dir], build_dir=f"{self.verilog_dir}/verilator", command_args=("--x-assign","0",))
self.is_verilog_inference = True
self.verify = verify
self.logfile = logfile
if add_registers:
self.latency = len(self.num_neurons)
@@ -95,21 +97,22 @@ def verilog_forward(self, x):
self.dut.io.clk = 0
for i in range(x.shape[0]):
x_i = x[i,:]
y_i = self.pytorch_forward(x[i:i+1,:])[0]
xv_i = list(map(lambda z: input_quant.get_bin_str(z), x_i))
ys_i = list(map(lambda z: output_quant.get_bin_str(z), y_i))
xvc_i = reduce(lambda a,b: a+b, xv_i[::-1])
ysc_i = reduce(lambda a,b: a+b, ys_i[::-1])
self.dut["M0"] = int(xvc_i, 2)
for j in range(self.latency + 1):
#print(self.dut.io.M5)
res = self.dut[f"M{num_layers}"]
result = f"{res:0{int(total_output_bits)}b}"
self.dut.io.clk = 1
self.dut.io.clk = 0
expected = f"{int(ysc_i,2):0{int(total_output_bits)}b}"
result = f"{res:0{int(total_output_bits)}b}"
assert(expected == result)
if self.verify:
y_i = self.pytorch_forward(x[i:i+1,:])[0]
ys_i = list(map(lambda z: output_quant.get_bin_str(z), y_i))
ysc_i = reduce(lambda a,b: a+b, ys_i[::-1])
expected = f"{int(ysc_i,2):0{int(total_output_bits)}b}"
assert(expected == result)
res_split = [result[i:i+output_bitwidth] for i in range(0, len(result), output_bitwidth)][::-1]
yv_i = torch.Tensor(list(map(lambda z: int(z, 2), res_split)))
y[i,:] = yv_i
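The input-driving logic above packs one binary string per input element and concatenates them in reverse, so element 0 lands in the least-significant bits of the flat `M0` port. A self-contained sketch of just that packing step:

```python
from functools import reduce

# e.g. three 2-bit input codes, as produced by input_quant.get_bin_str(...)
xv_i = ["01", "11", "10"]
xvc_i = reduce(lambda a, b: a + b, xv_i[::-1])  # "101101": element 0 is LSB
m0 = int(xvc_i, 2)                              # 45, written to self.dut["M0"]
assert xvc_i == "101101" and m0 == 45
```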
19 changes: 15 additions & 4 deletions examples/cybersecurity/neq2lut.py
@@ -20,7 +20,8 @@

from logicnets.nn import generate_truth_tables, \
lut_inference, \
module_list_to_verilog_module
module_list_to_verilog_module, \
load_histograms
from logicnets.synthesis import synthesize_and_get_resource_counts
from logicnets.util import proc_postsynth_file

@@ -34,6 +35,8 @@
"checkpoint": None,
"generate_bench": False,
"add_registers": False,
"histograms": None,
"freq_thresh": None,
"simulate_pre_synthesis_verilog": False,
"simulate_post_synthesis_verilog": False,
}
@@ -68,6 +71,10 @@
help="A location to store the log output of the training run and the output model (default: %(default)s)")
parser.add_argument('--checkpoint', type=str, required=True,
help="The checkpoint file which contains the model weights")
parser.add_argument('--histograms', type=str, default=None,
help="The checkpoint histograms of LUT usage (default: %(default)s)")
parser.add_argument('--freq-thresh', type=int, default=None,
help="Threshold to use to include this truth table into the model (default: %(default)s)")
parser.add_argument('--generate-bench', action='store_true', default=False,
help="Generate the truth table in BENCH format as well as verilog (default: %(default)s)")
parser.add_argument('--dump-io', action='store_true', default=False,
@@ -141,9 +148,12 @@
'test_accuracy': lut_accuracy}

torch.save(modelSave, options_cfg["log_dir"] + "/lut_based_model.pth")
if options_cfg["histograms"] is not None:
luts = torch.load(options_cfg["histograms"])
load_histograms(lut_model, luts)

print("Generating verilog in %s..." % (options_cfg["log_dir"]))
module_list_to_verilog_module(lut_model.module_list, "logicnet", options_cfg["log_dir"], generate_bench=options_cfg["generate_bench"], add_registers=options_cfg["add_registers"])
module_list_to_verilog_module(lut_model.module_list, "logicnet", options_cfg["log_dir"], generate_bench=options_cfg["generate_bench"], add_registers=options_cfg["add_registers"], freq_thresh=options_cfg["freq_thresh"])
print("Top level entity stored at: %s/logicnet.v ..." % (options_cfg["log_dir"]))

if args.dump_io:
@@ -154,9 +164,10 @@
else:
io_filename = None


if args.simulate_pre_synthesis_verilog:
print("Running inference simulation of Verilog-based model...")
lut_model.verilog_inference(options_cfg["log_dir"], "logicnet.v", logfile=io_filename, add_registers=options_cfg["add_registers"])
lut_model.verilog_inference(options_cfg["log_dir"], "logicnet.v", logfile=io_filename, add_registers=options_cfg["add_registers"], verify=options_cfg["freq_thresh"] is None or options_cfg["freq_thresh"] == 0)
verilog_accuracy = test(lut_model, test_loader, cuda=False)
print("Verilog-Based Model accuracy: %f" % (verilog_accuracy))

@@ -166,7 +177,7 @@
if args.simulate_post_synthesis_verilog:
print("Running post-synthesis inference simulation of Verilog-based model...")
proc_postsynth_file(options_cfg["log_dir"])
lut_model.verilog_inference(options_cfg["log_dir"]+"/post_synth", "logicnet_post_synth.v", io_filename, add_registers=options_cfg["add_registers"])
lut_model.verilog_inference(options_cfg["log_dir"]+"/post_synth", "logicnet_post_synth.v", io_filename, add_registers=options_cfg["add_registers"], verify=options_cfg["freq_thresh"] is None or options_cfg["freq_thresh"] == 0)
post_synth_accuracy = test(lut_model, test_loader, cuda=False)
print("Post-synthesis Verilog-Based Model accuracy: %f" % (post_synth_accuracy))

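The new `--freq-thresh` option ties the pieces together: truth-table entries whose visit count in the loaded histograms falls below the threshold can be dropped from the generated netlist, and because the pruned circuit is then no longer guaranteed to be bit-exact with the PyTorch model, both `verilog_inference` calls above disable `verify` whenever a nonzero threshold is set. A hedged sketch of the thresholding idea, not the library's actual implementation:

```python
import torch

def keep_mask(histogram: torch.Tensor, freq_thresh: int) -> torch.Tensor:
    """True where a truth-table entry was exercised often enough to keep;
    entries below the threshold become don't-cares for synthesis."""
    return histogram >= freq_thresh

hist = torch.tensor([0, 3, 120, 1, 57])  # visit counts per truth-table row
print(keep_mask(hist, freq_thresh=2))    # tensor([False,  True,  True, False,  True])
```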
12 changes: 12 additions & 0 deletions examples/cybersecurity/train.py
@@ -44,6 +44,8 @@
"learning_rate": 1e-1,
"seed": 25,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-s-comp": {
"hidden_layers": [49, 7],
@@ -59,6 +61,8 @@
"learning_rate": 1e-1,
"seed": 81,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-m": {
"hidden_layers": [593, 256, 128, 128],
@@ -74,6 +78,8 @@
"learning_rate": 1e-1,
"seed": 20,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-m-comp": {
"hidden_layers": [593, 256, 49, 7],
@@ -89,6 +95,8 @@
"learning_rate": 1e-1,
"seed": 40,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-l": {
"hidden_layers": [593, 100, 100, 100],
@@ -104,6 +112,8 @@
"learning_rate": 1e-1,
"seed": 2,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
"nid-l-comp": {
"hidden_layers": [593, 100, 25, 5],
@@ -119,6 +129,8 @@
"learning_rate": 1e-1,
"seed": 83,
"checkpoint": None,
"histograms": None,
"freq_thresh": None,
},
}

2 changes: 1 addition & 1 deletion examples/jet_substructure/config/yaml_IP_OP_config.yml
@@ -45,5 +45,5 @@ L1Reg: 0.0001
NormalizeInputs: 1
InputType: Dense
ApplyPca: false
PcaDimensions: 10
PcaDimensions: 12
