Slow fetching data even when using setWaitForConversion(false) #254

Open
bobemoe opened this issue Jun 27, 2024 · 4 comments

@bobemoe

bobemoe commented Jun 27, 2024

I am reading 12+ sensors. I use setWaitForConversion(false) so requestTemperatures() returns instantly and, as I understand it, the conversions are done in the background. Then when I want the temps I use getTempC(address), which takes 13ms to return. This is much faster than waiting for the conversion, but still, when reading 10 sensors this is 130ms, which I can't afford to block for, and we want to add more sensors...

I have refactored my code to read only one 13ms block per loop, which spreads out the blocking rather than having it all at once, but I would still like to investigate the reason for the delay. Is it purely the unavoidable transfer time of the converted data from sensor to processor, or are there other ways to optimise this?
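A simplified sketch of the approach (the pin, sensor count and address handling are placeholders for my actual setup, and it omits waiting for the conversion time before the first read):

```cpp
#include <OneWire.h>
#include <DallasTemperature.h>

OneWire oneWire(2);                      // data pin is a placeholder
DallasTemperature sensors(&oneWire);

const uint8_t SENSOR_COUNT = 12;
DeviceAddress addr[SENSOR_COUNT];        // filled in setup() via getAddress()
float temps[SENSOR_COUNT];
uint8_t nextSensor = 0;

void setup() {
  sensors.begin();
  sensors.setWaitForConversion(false);   // requestTemperatures() returns immediately
  for (uint8_t i = 0; i < SENSOR_COUNT; i++) {
    sensors.getAddress(addr[i], i);
  }
  sensors.requestTemperatures();         // start the first conversions in the background
}

void loop() {
  // read only one sensor (~13 ms) per pass instead of blocking for all of them
  temps[nextSensor] = sensors.getTempC(addr[nextSensor]);
  nextSensor = (nextSensor + 1) % SENSOR_COUNT;
  if (nextSensor == 0) {
    sensors.requestTemperatures();       // kick off the next round of conversions
  }

  // ... the rest of the loop stays responsive ...
}
```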

Since the conversion has already been done at this point, I assume lowering the resolution will not have any impact, so I have not tested this. Is that correct?

By my (cough, ChatGPT) calculations:

The data transfer time for one DS18B20 temperature sensor over the OneWire bus can be summarized as follows:

Communication Speed: 16.3 kbps (kilobits per second)
Command Bits: 8 bits (1 byte) to send the "Read Scratchpad" command
Data Bits: 72 bits (9 bytes) to read the sensor's scratchpad

Total Bits Transferred

Total Bits per Sensor: 8 bits (command) + 72 bits (data) = 80 bits

Time Calculation

Time to Transfer 80 Bits:
Time = 80 bits / 16300 bits per second ≈ 0.0049 seconds
Time in Milliseconds: Approximately 4.9 milliseconds

This is the theoretical minimum time required to transfer the temperature data for one sensor, excluding any additional overhead from command initiation, library processing, or microcontroller handling.
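For context, 16.3 kbps corresponds to roughly 61 µs per bit, which matches the standard 1-Wire read-slot timing (a ~60 µs time slot plus ~1 µs recovery), so the figure itself looks plausible.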

So I'm guessing the additional ~8ms are overhead and unavoidable? Or can we shave them off somehow?

Any tips, tricks or optimisations appreciated. Thanks!

@RobTillaart
Contributor

Interesting question.
Can you share your (stripped) code?

@RobTillaart
Contributor

One thing that might be possible is to reduce how much of the ScratchPad the library reads.
For the temperature you only need the first two bytes.
[image: scratchpad layout from the DS18B20 datasheet]
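For reference, a rough sketch of what a shortened read could look like at the OneWire level, assuming ds is an OneWire instance and deviceAddress holds the ROM code (0xBE is the Read Scratchpad command, and the DS18B20 lets the master issue a reset to stop the read early):

```cpp
// read only the temperature LSB/MSB instead of the full 9-byte scratchpad
ds.reset();
ds.select(deviceAddress);
ds.write(0xBE);                 // Read Scratchpad
uint8_t lsb = ds.read();
uint8_t msb = ds.read();
ds.reset();                     // abort the read, skipping the remaining 7 bytes
int16_t raw = (int16_t)((msb << 8) | lsb);
float tempC = raw / 16.0;       // 12-bit resolution: 1/16 degC per LSB
```

Note that skipping the rest of the scratchpad also skips the CRC byte, so there is no integrity check on the reading.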

@RobTillaart
Contributor

RobTillaart commented Jun 27, 2024

bool DallasTemperature::readScratchPad(const uint8_t * deviceAddress, uint8_t * scratchPad)
should be extended with a parameter that defines the number of bytes to read.
By default it should fetch 9 bytes, but the first 2 should be sufficient for the temperature.

Somewhere in the getTempC() call stack the above call will appear.

This would reduce IO and thus time needed.
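Something like this, perhaps (untested sketch; the real function body differs and the default value for the new parameter would live in the header so existing callers keep working):

```cpp
// illustrative only: add a byte count, defaulting to the full 9-byte scratchpad
bool DallasTemperature::readScratchPad(const uint8_t* deviceAddress,
                                       uint8_t* scratchPad,
                                       uint8_t count /* = 9 in the header */)
{
    if (_wire->reset() == 0) return false;   // no presence pulse, device missing
    _wire->select(deviceAddress);
    _wire->write(READSCRATCH);               // 0xBE, Read Scratchpad
    for (uint8_t i = 0; i < count; i++) {
        scratchPad[i] = _wire->read();
    }
    _wire->reset();                          // stop early when count < 9
    return true;
}
```

With count = 2 the CRC byte is never read, so a caller that wants the CRC check would still request all 9 bytes.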

I did a similar trick in my https://github.com/RobTillaart/DS18B20_RT library, using a flag DS18B20_CRC to disable the CRC check; however, my library only supports one sensor per pin.

@RobTillaart
Contributor

Measurements with my library on an UNO, timing for getTempC() only:

If fetching 9 bytes it takes ~28 millis; if fetching 2 bytes it takes ~24 millis.
So there is about ~4 millis to be gained by only fetching 2 bytes.

I don't know how this gain ports to another processor.
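For reference, that lines up with the bit-rate estimate above: 7 bytes × 8 bits = 56 bits fewer on the bus, and 56 / 16300 bps ≈ 3.4 ms, so the gain is roughly the skipped bus traffic.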
