A better walkthrough of the training data collection process? #127
Comments
It’s late at night on the West Coast right now, so I’m just going to post a short comment for now. Very short answer: my data collection process really sucks (because it’s unintuitive), and I’m currently refactoring it based on best practices from here: http://docs.donkeycar.com/guide/get_driving/. If you scroll down about halfway on that page you’ll see a really slick mobile web UI that you can use to gather training data. The full refactoring will most likely take another 5-6 weeks. I highly recommend looking at the donkeycar repo in general. The long answer is sort of in the main readme of my repo, but if that’s insufficient, a better answer will take me too long to type on my phone right now. I’ll reply back to this issue with better instructions once I’ve finished the refactoring.
Okay, I am now reading more into the code and the readme. Respond with a thumbs up so I know I am going in the right direction.
Not really relevant here, but adding it in case someone else stumbles on this problem: I was using the code here to capture the frames and stream them back to the server. Thanks a bunch for all your help @RyanZotti
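For anyone else landing here, the kind of capture-and-stream setup described in that comment looks roughly like the sketch below. This is illustrative only and is not the code linked above; the host, port, and length-prefixed JPEG framing are assumptions, and it assumes the Pi camera shows up as the default OpenCV device.

```python
# Hypothetical sketch: capture frames on the Pi and stream them to a server
# over a plain TCP socket. HOST/PORT and the 4-byte length-prefixed JPEG
# framing are assumptions for illustration, not the actual linked code.
import socket
import struct

import cv2

HOST = "192.168.1.2"   # assumed server address
PORT = 8000            # assumed server port

def stream_frames(host=HOST, port=PORT):
    cap = cv2.VideoCapture(0)          # assumes the Pi camera is the default device
    sock = socket.create_connection((host, port))
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # JPEG-encode each frame and send it with a 4-byte length prefix
            encoded, jpeg = cv2.imencode(".jpg", frame)
            if not encoded:
                continue
            data = jpeg.tobytes()
            sock.sendall(struct.pack(">I", len(data)) + data)
    finally:
        cap.release()
        sock.close()

if __name__ == "__main__":
    stream_frames()
```

On the receiving side, a server would read the 4-byte length, read that many bytes, and decode the JPEG with `cv2.imdecode` to get a frame back.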
Could you please give me a clearer overview of how the training data was collected?
A brief rundown of your tech stack for streaming, and of which Python files are involved, would greatly help.
I am currently using the following to stream video from the Pi:

```
raspivid -n -t 0 -rot 270 -w 960 -h 720 -fps 30 -b 6000000 -o - | gst-launch-1.0 -e -vvvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! udpsink host=192.168.1.2 port=5000
```

Then I view it using GStreamer:

```
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false
```
I am also trying Hamuchiwa's approach, but no luck yet. Which collection method would be better?
The streaming setup seems to work fine, but I don't know how I would pipe/send this into OpenCV (see the sketch below).
A clearer explanation of your method would greatly help me.
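For anyone else who hits this: if OpenCV was built with GStreamer support, one way to pull the UDP H.264 stream above into Python is to hand `cv2.VideoCapture` a GStreamer pipeline string that ends in `appsink`. This is only a minimal sketch of that idea, not how this repo collects its training data; the pipeline string simply mirrors the receiving `gst-launch-1.0` command above.

```python
# Minimal sketch (not this repo's method): read the UDP H.264 stream from the
# Pi into OpenCV. Assumes OpenCV was built with GStreamer support.
import cv2

# Receiving pipeline, mirroring the gst-launch command above, but ending in
# appsink so OpenCV can pull decoded frames.
pipeline = (
    "udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! "
    "rtph264depay ! avdec_h264 ! videoconvert ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Could not open GStreamer pipeline; check the OpenCV build")

while True:
    ok, frame = cap.read()   # frame is a BGR numpy array, ready for processing
    if not ok:
        break
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

If the `VideoCapture` fails to open, the usual culprit is an OpenCV wheel compiled without GStreamer; building OpenCV from source with GStreamer enabled (or decoding the RTP stream some other way) would be needed in that case.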