Temperature dependent charge curve #1844
Comments
I think this is a good idea but I don't know how the charge curve is impacted by temperature, do you have data on this? Predbat can calculate the charge curve dynamically so you can set it to auto, but it will clearly be from last night rather than right now. |
There is some discussion on this thread on the GE forum: https://community-beta.givenergy.cloud/t/battery-charging-rate/2219/9 The TL;DR is that GE say their standard spec for charge rates is: above 20°C: 100%; 10-20°C: 60%; 0-10°C: 50%. On the face of it a first release of this appears a little simpler than the current charge rate calculation. It would be a sort of expert mode, but in apps.yaml, since if a user's inverter control service can report temperature they would need to either know the limits from a specification or observe the charge rate changes, then set the limits. |
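For illustration, the sort of thing being suggested might look like this in apps.yaml. This is a sketch only: the key name battery_temperature_charge_curve and the exact format are assumptions, not a confirmed Predbat option, mapping battery temperature to a fraction of the maximum charge rate per the GE figures above.

```yaml
# Sketch only: key name and format are assumptions, not a confirmed option.
# Each entry maps a battery temperature (degrees C) to a fraction of the
# maximum charge rate, following the GE figures quoted above.
battery_temperature_charge_curve:
  20: 1.0   # 20 degrees C and above: full rate
  10: 0.6   # 10-19 degrees C: 60% of max rate
  0: 0.5    # 0-9 degrees C: 50% of max rate
```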
It does get complicated because, as you say, this is often tweaked depending on firmware. On, say, firmware 3007 there is no temperature impact, while on 3019 (in beta) you get the 100% rate all the way down to 10 degrees. Then there is the number of batteries on an inverter (and its discharge rate). For example, 2x9.5s may not throttle (as, say, 30% of the total C-rating still exceeds the capacity of the inverter, so no derate is visible) when 1x9.5 would. Would you also need to consider a prediction in the plan of when the temperature would reach the next threshold? I can see value for those that see it, but it would need some careful thought about how to implement. |
I'm not expecting predbat to auto-discover all of this. What I really want to avoid is predbat executing a charge period and only seeing that something is wrong when the SOC doesn't rise as planned. Any forewarning must be positive for the plan. I see it as something rather coarse, and I would happily use an automation to change the charge rate if it wasn't inside apps.yaml. I imagine it as something like the manual charge curve currently, where the user defines the temperature and the percentage. If you don't define it, nothing happens; if you do, it does. |
You can change this via an automation by using the predbat manual API https://springfall2008.github.io/batpred/manual-api/ I had inverter limit charge and discharge added to the list of controllable entities for just this reason. However, I think there is still value in being able to configure the custom charge curve in Predbat and this being an advanced feature as discussed above because it does vary by inverter Gen and battery combination. Predbat could then apply current battery cell temperature to find the correct charge rate for the plan. |
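As a rough illustration of the automation route, something along these lines could apply a cruder version today. This is a sketch only: the trigger sensor and the Predbat number entity names here are assumptions for illustration; the real controllable entities are listed in the manual API documentation linked above.

```yaml
# Hypothetical Home Assistant automation: drop the inverter charge limit when
# the battery cells get cold. Entity names are assumptions, not verified;
# check the Predbat manual API docs for the actual controllable entities.
automation:
  - alias: "Reduce charge rate when battery is cold"
    trigger:
      - platform: numeric_state
        entity_id: sensor.givtcp_battery_temperature  # assumed sensor name
        below: 10
    action:
      - service: number.set_value
        target:
          entity_id: number.predbat_inverter_limit_charge  # assumed entity name
        data:
          value: 1300  # watts; roughly 50% of a 2600W max rate
```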
Ah. Ok I will look at the manual API, thank you for highlighting it. I haven’t ventured beyond expert mode so far. The battery temperature will always be a lagging indicator unfortunately but I do feel predbat knowing and acting on the current charge limit is better than not knowing. |
Thanks @gcoan. I think I have an automation working now to call the manual API. I have triggered it manually a few times, just need to wait for real events to occur now to fully verify. |
New release v8.11.0 will contain a first version of this feature |
Thank you Trefor. Will make changes tomorrow and get back to you. |
That table isn't right; I'd say it's indicative and based on AC charge rates. From my observations I'm closer to 0.3C on a 9.5kWh with a G3 Hybrid. That is what I've set in apps.yaml. @gcoan I would say that's consistent with what you see for your 9.5. But it's interesting that your 5.2 is at around 0.37, using your reported 1.8kW charge rate as a DC value. @springfall2008 I can see temperature being pulled in correctly in the logs. Is there any other evidence to look for, or do I need to either wait for or force a charge? |
I was checking the documentation on this after a couple of days of use. There is a potential inconsistency between the documentation and the apps.yaml template text. The main body note says: "...gaps in the curve above 20 will use 20 degrees, gaps below 0 will use 0 degrees. Do not leave gaps in the curve between 20 and 0." The apps.yaml template says: "values unspecified will be assumed to be 1.0 hence rate is capped by max charge rate" |
I suspect the documentation is correct and the apps.yaml is wrong, but they ought to be consistent, I agree!

I am still struggling to work out how to set the temperature curve up for me. It is made harder by historic data that includes the effects of low power charging, and by the sensor history being purged so that only long-term stats remain, meaning less granularity of data...

From what I can see I am pretty confident that, at least in the 10-20 and 20+ ranges, both my 9.5 and my 5.2 are operating as per the table APART FROM the max rate charge, which is only ever 2425W, not the 2600W that the inverter is capable of. The difference is most probably because the spec measures charging in AC while the actual reported charge rate is in DC, after conversion losses.

The problem is how to model the temperature curve and input_number.predbat_battery_rate_max_scaling correctly. If I set input_number.predbat_battery_rate_max_scaling to 1.0 and just rely on the temperature curve, then any charging that is not temperature limited (i.e. 5.2 > 20 degrees and 9.5 > 10 degrees) will incorrectly assume a 2600W charge rate rather than the 2425W that is achieved by the inverters. If I leave input_number.predbat_battery_rate_max_scaling set to 0.92 to reflect the reduced max rate charging, then it risks under-estimating the temperature-limited charging.

I think what I need the logic to be is: the temperature curve is applied first, and battery_rate_max_scaling is applied on top of it, so the 0.92 covers the AC-to-DC difference whether or not temperature limiting is active.

@springfall2008 does this match how the code works? |
A few things I have worked out playing with this.

Firstly, the table from the GE forum is almost certainly AC rates. We know GE inverters report the AC charge rate, but we actually see the DC rate in the GE app, GivTCP and predbat.

battery_rate_max_scaling is applied after the temperature curve is applied. I had a poke around the pull request, and you can see it in the code as a multiplier applied after a function called get_charge_rate_curve, used to set charge_rate_now. This is consistent with the longer-standing charge power curve, which is also an argument of get_charge_rate_curve.

If you're working backwards to derive your curve, we need to remember the AC/DC point for GE inverters, so reverse any scaling applied in battery_rate_max_scaling when doing the sums. From what I found in the GE cloud data, I think the logic in predbat is sound, in that everything is calculated from inverter-reported rates.

I'm most intrigued by the knee in the overnight charge I'm now seeing since this release. It is present with and without a temperature_charge_curve set in apps.yaml when low_power_charge is true. Turning low_power_charge false and using a temperature_charge_curve does not result in the knee. Which leads me to think that the adjustments made to the low power mode calculations to accommodate temperature_charge_curve have impacted the output in an unexpected way. |
Interesting investigation, thanks. I have today upgraded to 8.11.0 so will see what difference it makes. I too thought there were inconsistencies in the table with AC vs DC rates and what's used where.

For clarity, I don't think battery_rate_max_scaling is applied to the charge rate used by the inverter (i.e. number.givtcp_xxxx_battery_charge_rate); my experience is that it's only used in Predbat for calculating what the effective charge rate is. What I see is the inverter charge rate (for a full rate charge) being set to 2600W, but Predbat knows in the plan calculation that it will actually only achieve 2450W or whatever it is. If you want to limit the inverter charge rate from the inverter maximum you need to set inverter_limit_charge in apps.yaml.

So if the table says C=0.33 for 10-20 degrees and battery_rate_max_scaling is 0.92, then is the conclusion that we should put a value of 0.33/0.92=0.36 in the apps.yaml temperature curve? And for C=0.25 for 2-10 degrees, is it 0.25/0.92=0.27? |
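To make that arithmetic concrete, a compensated curve might look like the sketch below (key name and format assumed, as earlier; each table C rate is divided by battery_rate_max_scaling because the scaling is applied on top of the curve):

```yaml
# Sketch only, key name assumed: divide each table C rate by
# battery_rate_max_scaling (0.92 here) so that, once Predbat applies the
# scaling after the curve, the result lands back on the table's C rates.
battery_temperature_charge_curve:
  20: 1.0    # above 20 degrees C: full rate, still capped by max charge rate
  10: 0.36   # 0.33 / 0.92
  0: 0.27    # 0.25 / 0.92
```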
You are right about the use of battery_rate_max_scaling. I know from my logs and GivTCP logs that predbat sets my inverter to 3600W, its AC max. I didn't do the maths that way: I divided the observed limited charge rate by battery_rate_max_scaling to get the AC limited rate, then divided that by battery capacity to get the C rate. |
I think it comes out to much the same number.

Charging on my 5.2 earlier this evening was observed at 1720W (although the battery temp had just risen from 20 to 21, so in theory it shouldn't have been limited): 1720/0.92=1869. My calculation, taking the figure given in the table as correct, is 0.33/0.92=0.359. Yours has the advantage of working out the actual AC limited rate from evidence rather than trusting the table.

Looking at the evidence shows there is variation in charge rates. Overnight, when the battery temperature was 18 degrees rising through to 21 degrees, the 5.2 battery was charging at between 1780 and 1790W: 1785/0.92=1940.

All useful discussion, and I can expand the documentation with more details on how to calculate your own temperature charging curve. Thanks! |
I've gone deep down this rabbit hole, so be prepared.

I also noticed the charge rate subtly rising. That tells me a true amps limit is being applied by the BMS, which, from my little bit of research into C-rating, is the first-principles way of calculating and using it.

From there I looked at the data sheet for the batteries. I think further variability between units comes from the stated max charge current of each unit. The 5.2 has a stated amp capacity of 102Ah and a max charge current of 50A; 50A is basically 0.5C, as we would expect. So 0.33C is 33.33A, which at the nominal voltage of 51.2V is about 1.7kW (DC, I think), matching the table. So it works for this battery. The 9.5, whilst having an amp capacity of 186Ah, has a max charge current of 80A, so that is already 0.43C. On the same logic, for the 9.5 you get to 53.33A. My observed charge rate and observed voltage work out to around 50A, which gives you 0.26C.

In summary, I think there is a lot of variability between battery and inverter combinations, so as with many things the answer is: it depends. As heat is the enemy and heat comes from amps, it is likely a judgement being made by GE about heat and heat shock, setting a max charge current to manage that.

What it does make me realise is that a big factor in the AIO having much higher battery charge and discharge rates is its higher nominal battery voltage of 307V: with a lower max charge current of 25A it can hit 7.5kW and presumably generate a lot less heat in the battery cells. We see similar in EVs moving to 800V architectures: reduce heat, reduce losses, reduce the size (and so weight) of HV cables. Higher voltage is generally superior. |
Is your feature request related to a problem? Please describe.
First winter with a battery and I am starting to see temperature-dependent charge limits being applied in the battery firmware, affecting predbat's planning. This is especially true when trying to maximise short slots.
Describe the solution you'd like
Where the inverter control service allows it, use a temperature reading and apply a charge curve based on the temperature. Similar to the current charge curve implementation, but based on temperature rather than SOC.
Describe alternatives you've considered
Setting the charge limit in apps.yaml to the most often observed reduced rate. But this reduces the charge rate permanently, not just when the temperature is low.
Additional context
For GivEnergy consumer batteries it can be observed in the GE app or GE Cloud that the BMS temperature, which GivTCP reports, is being used to apply a percentage scale to the max charge rate.