Describe your Request
Right now, the implementation of PrintNanny Vision is embedded in the PrintNanny OS system image. PrintNanny OS bundles the whole WebRTC-based video streaming stack, camera drivers, and vision/detection applications (GStreamer pipelines).
We want to separate the vision components so they can exist as a stand-alone SDK for OEMs looking to integrate PrintNanny into their existing software stack.
Community Edition
- tl;dr: Connect PrintNanny up to any camera system using an open-source model.
- demo: Included in PrintNanny OS
- licensing: AGPL
Please take a look at inference step 4 below.
OEM Edition
- tl;dr: Train a PrintNanny model customized for YOUR 3D printer hardware.
- demo: TBD
- licensing: Commercial
Plug PrintNanny into your existing camera system. The bare-bones interfaces needed to collect data, train, and deploy:
1. Data collection
- Define an Arrow schema for raw Bayer sensor data (so we're agnostic to the encoding stack), temperature sensor data, and filament flow rate sensors. A hypothetical schema sketch follows this list.
- Collect one sample frame and a histogram of temperature readings per Z-axis movement.
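As a point of reference, here is a minimal sketch of what such an Arrow schema could look like, using pyarrow. All field names and types are illustrative assumptions, not a published PrintNanny schema:

```python
import pyarrow as pa

# Hypothetical sketch of the per-sample schema described above.
# Field names are illustrative assumptions, not a published PrintNanny schema.
SAMPLE_SCHEMA = pa.schema([
    ("ts", pa.timestamp("ns")),                  # capture time
    ("z_height_mm", pa.float32()),               # Z-axis position that triggered the sample
    ("bayer_frame", pa.binary()),                # raw Bayer sensor dump (encoding-agnostic)
    ("bayer_pattern", pa.string()),              # e.g. "RGGB", "BGGR"
    ("frame_width", pa.uint16()),
    ("frame_height", pa.uint16()),
    ("hotend_temp_c", pa.list_(pa.float32())),   # temperature readings since last Z move
    ("bed_temp_c", pa.list_(pa.float32())),
    ("flow_rate_mm3_s", pa.list_(pa.float32())), # filament flow rate sensor readings
])
```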
```python
import printnanny_vision

# Configure your API key
printnanny_vision.init(api_key="demo")

# Provide a name and schema for your dataset
SCHEMA = "/path/to/arrow/schema"
DATASET_NAME = "2023-05-08__Printer1__ModelFilename"

# Collect data samples until someone runs control+c to interrupt this script.
my_dataset = printnanny_vision.Dataset(schema=SCHEMA, name=DATASET_NAME)

try:
    print(f"PrintNanny is collecting samples for dataset {DATASET_NAME}. Press control+c to interrupt and upload dataset.")
    my_dataset.run_collector()
except KeyboardInterrupt:
    print(f"PrintNanny is uploading {DATASET_NAME}. This could take a while, you might want to grab a coffee ☕")
    # Upload dataset, and print upload progress to terminal
    my_dataset.upload(progress=True)
    print(f"PrintNanny finished uploading {DATASET_NAME}! You can view it at: {my_dataset.url}")
```
2. Labeling
- Bounding box defective areas
- Paint (segment) defective areas

TBD. I use a fork of VoTT for my labeling infrastructure, with a guidance model to speed up manual labeling; a sketch of that pre-labeling pass is below.
We have the option of partnering with a data labeling service here.
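To make the guidance-model idea concrete, here is a minimal sketch of a pre-labeling pass that proposes bounding boxes for a human to correct in VoTT. The `guidance_model` object, its `detect()` method, and the output format are all illustrative assumptions, not part of any existing printnanny_vision or VoTT API:

```python
import json

# Hypothetical pre-labeling pass: a guidance model proposes bounding boxes
# that a human then corrects in the labeling tool. `guidance_model` and the
# output layout are illustrative assumptions.
def prelabel(frames, guidance_model, out_path="prelabels.json"):
    labels = []
    for frame_id, image in frames:
        for det in guidance_model.detect(image):  # assumed: returns box + score
            labels.append({
                "frame_id": frame_id,
                "label": "defect",
                # Axis-aligned box; segmentation masks would add a "polygon"
                # field painted over the defective area.
                "box": {"x1": det.x1, "y1": det.y1, "x2": det.x2, "y2": det.y2},
                "score": det.score,  # kept so low-confidence boxes are triaged first
            })
    with open(out_path, "w") as f:
        json.dump(labels, f)
```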
3. Training
- EfficientDet backbone
- BiFPN allows us to start with image data, then add additional feature extractor networks for temperature/flow rate data (see the fusion sketch after the code below)

For a first pass (without temperature/flow rate data), we can use any commodity vision AutoML product; the blog post linked in the snippet below shows example results from Google Cloud AutoML Vision.
```python
import printnanny_vision

DATASET_NAME = "2023-05-08__Printer1__ModelFilename"

# Submit training job via Google Cloud Platform AutoML platform
# (get a quick working prototype for ~$200, minimum 4,000 samples)
# See this blog post for an example: https://medium.com/towards-data-science/soft-launching-an-ai-ml-product-as-a-solo-founder-87ee81bbe6f6
printnanny_vision.train(
    dataset_name=DATASET_NAME,
    timeout="6h",
    backend="gcp-automl",
    model_name="2023-05-08_AutoML",
)

# Run a local EfficientDet training job, incorporating flow rate and temperature data
printnanny_vision.train(
    dataset_name=DATASET_NAME,
    timeout="6h",
    backend="printnanny-efficientdet",
    model_name="2023-05-08-efficientdet",
)
```
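For intuition, here is a minimal sketch (in PyTorch) of the fusion idea: broadcast an embedding of the sensor histograms across the backbone's image feature maps before they reach the BiFPN/detection head. Module names, shapes, and dimensions are illustrative assumptions, not PrintNanny's actual architecture:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the sensor-fusion idea above: the EfficientDet BiFPN
# consumes image feature maps; here a small MLP over the temperature/flow-rate
# histograms is broadcast and mixed in as extra channels. Shapes are assumptions.
class SensorFusion(nn.Module):
    def __init__(self, image_channels=64, sensor_bins=32, sensor_dim=16):
        super().__init__()
        self.sensor_mlp = nn.Sequential(
            nn.Linear(sensor_bins, sensor_dim),
            nn.ReLU(),
        )
        # 1x1 conv mixes image features with the broadcast sensor embedding
        self.mix = nn.Conv2d(image_channels + sensor_dim, image_channels, kernel_size=1)

    def forward(self, image_feats, sensor_hist):
        # image_feats: (B, C, H, W) from the EfficientDet backbone
        # sensor_hist: (B, sensor_bins) histogram of temperature/flow readings
        s = self.sensor_mlp(sensor_hist)                           # (B, sensor_dim)
        s = s[:, :, None, None].expand(-1, -1, *image_feats.shape[2:])
        return self.mix(torch.cat([image_feats, s], dim=1))
```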
4. Inference
- 1 inference pass per Z-axis layer (a sketch of the layer-change trigger follows the snippet below)
- Online (cloud) inference
- Offline (air-gapped) inference remains available in PrintNanny OS as a reference implementation, and we'll work with vendors directly where air-gapped operation is P0
```python
import printnanny_vision

JOB_NAME = "KaplanTurbineV2.stl"
MODEL_NAME = "2023-05-08__EfficientDet"
CAMERA_ENDPOINT = "http://localhost:8080/snapshot"
NOTIFICATION_WEBHOOK = "http://localhost:8080/notifications"

# On a z-axis height change, call printnanny_vision.monitor()
printnanny_vision.monitor(
    camera=CAMERA_ENDPOINT,
    model_name=MODEL_NAME,
    save_results=True,
    job_name=JOB_NAME,
    webhook=NOTIFICATION_WEBHOOK,
)
```
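As one possible way to wire up the per-layer trigger, here is a minimal sketch that watches a G-code stream for Z moves and fires monitor() on each layer change. The gcode_lines source is an assumption; in practice you'd hook your printer host's layer-change event (e.g. an OctoPrint plugin) instead:

```python
import re
import printnanny_vision

# Hypothetical glue: fire one inference pass per Z-axis layer by watching an
# incoming G-code stream for Z moves. The gcode_lines source is an assumption.
Z_MOVE = re.compile(r"^G[01]\b.*\bZ(?P<z>-?\d+(\.\d+)?)")

def watch(gcode_lines, **monitor_kwargs):
    last_z = None
    for line in gcode_lines:
        m = Z_MOVE.match(line)
        if m and (z := float(m.group("z"))) != last_z:
            last_z = z
            printnanny_vision.monitor(**monitor_kwargs)  # one pass per layer
```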
5. Feedback
- Build a data frame where 1 row is the original input data + inference pass
- Notification webhook for true positives ☝️ Configured in the printnanny_vision.monitor() call above; a minimal receiver sketch is below
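For reference, here's a minimal webhook receiver using only the standard library, listening at the NOTIFICATION_WEBHOOK address from step 4. The payload fields (job_name, z_height_mm, score) are illustrative assumptions about what monitor() would POST:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical receiver for the notification webhook configured in
# printnanny_vision.monitor(). Payload field names are illustrative assumptions.
class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        # React to a true positive: pause the printer, page the operator, or
        # append the event to the feedback data frame described above.
        print(f"Defect reported for job {event.get('job_name')} "
              f"at z={event.get('z_height_mm')} (score={event.get('score')})")
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), NotificationHandler).serve_forever()
```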
This gives us everything we need to train and deploy a pilot model.