efficiency of network editing functions #762
By the way, I don't think that network skeletonization is the only use case that would be hit by this. Another good example is using the EPANET API to build models from scratch from GIS data sources.
Very useful analysis as usual Lew. I don't know if this is a priority, but I think the approach you laid out sounds great, especially for the addition functions.
James Uber, Ph.D.
From: Lew Rossman, Wednesday, November 22, 2023:
The potential bottlenecks might be the EN_addnode, EN_deletenode, EN_addlink, and EN_deletelink functions. The EN_add... functions require a total reallocation of the Node or Link struct arrays every time a single node or link is added to a project. One way to avoid this is to add a Capacity property to these structs. When a new node or link is added and the total count is still below Capacity, no reallocation is needed. Once the number of nodes/links reaches Capacity, the latter is increased by some amount and a single reallocation is made for the new Capacity. This same strategy is currently used with Curve elements (see the declaration of Scurve in types.h and the resizecurve function in project.c).
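The capacity-growth strategy described above could be sketched as follows. This is illustrative only: the struct layout and the names `Network` and `addnode` are stand-ins, not EPANET's actual internals, and a real implementation would live inside EN_addnode.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the Capacity-growth strategy. "Snode" stands
   in for EPANET's node struct; names are assumptions for illustration. */
typedef struct {
    char ID[32];
} Snode;

typedef struct {
    Snode *Node;     /* dynamically sized array of nodes */
    int    Nnodes;   /* number of nodes in use */
    int    Capacity; /* allocated slots; grown geometrically */
} Network;

/* Add a node, reallocating only when Capacity is exhausted. */
int addnode(Network *net, const char *id)
{
    if (net->Nnodes >= net->Capacity)
    {
        /* double the capacity so reallocations are amortized O(1) */
        int newcap = (net->Capacity == 0) ? 8 : net->Capacity * 2;
        Snode *tmp = realloc(net->Node, (size_t)newcap * sizeof(Snode));
        if (tmp == NULL) return 0;   /* allocation failed */
        net->Node = tmp;
        net->Capacity = newcap;
    }
    strncpy(net->Node[net->Nnodes].ID, id, sizeof(net->Node[0].ID) - 1);
    net->Node[net->Nnodes].ID[sizeof(net->Node[0].ID) - 1] = '\0';
    net->Nnodes++;
    return 1;
}
```

With geometric growth, adding N nodes performs O(log N) reallocations instead of N, which is the whole point of the Capacity field.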
EN_addnode also reallocates the NodeDemand, NodeQual, NodeHead, DemandFlow, and EmitterFlow arrays whenever a new node is added. These arrays are only used during an analysis run and could be reallocated just once when an analysis begins. Likewise, EN_addlink reallocates LinkFlow, LinkSetting, and LinkStatus whenever a new link is added; these reallocations can also be postponed until an analysis begins.
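The deferred-reallocation idea could look like the sketch below: one sizing pass at analysis start instead of one per EN_addnode call. The array names mirror those mentioned above, but the `Project` struct and `alloc_analysis_arrays` function are assumptions, not EPANET's actual code.

```c
#include <stdlib.h>

/* Sketch: size all analysis-only arrays once when a run begins,
   so repeated network edits never touch them. Names are illustrative. */
typedef struct {
    int     Nnodes;
    double *NodeDemand;
    double *NodeHead;
    double *NodeQual;
    double *DemandFlow;
    double *EmitterFlow;
} Project;

/* Called once when an analysis is opened, instead of on every node
   addition; realloc on a NULL pointer behaves like malloc. */
int alloc_analysis_arrays(Project *pr)
{
    size_t n = (size_t)pr->Nnodes + 1;   /* EPANET arrays are 1-based */
    pr->NodeDemand  = realloc(pr->NodeDemand,  n * sizeof(double));
    pr->NodeHead    = realloc(pr->NodeHead,    n * sizeof(double));
    pr->NodeQual    = realloc(pr->NodeQual,    n * sizeof(double));
    pr->DemandFlow  = realloc(pr->DemandFlow,  n * sizeof(double));
    pr->EmitterFlow = realloc(pr->EmitterFlow, n * sizeof(double));
    return pr->NodeDemand && pr->NodeHead && pr->NodeQual
        && pr->DemandFlow && pr->EmitterFlow;
}
```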
Regarding the EN_deletenode function: it requires shifting the entries of the Node array, and the indices stored in the NodeHashTable, down by one whenever the node at position index is deleted. The code for doing this looks as follows:
for (i = index; i <= net->Nnodes - 1; i++)
{
    net->Node[i] = net->Node[i + 1];
    // ... update node's entry in the hash table
    hashtable_update(net->NodeHashTable, net->Node[i].ID, i);
}
The hashtable_update function has to search the table for each node whose position is above index. It might be more efficient to move this update out of the loop and replace it with the following function:
void hashtable_shiftdown(HashTable *ht, int index)
{
    DataEntry *entry;
    int i;
    for (i = 0; i < HASHTABLEMAXSIZE; i++)
    {
        entry = ht[i];
        while (entry != NULL)
        {
            if (entry->data > index) (entry->data)--;
            entry = entry->next;
        }
    }
}
One potential downside of this new function is that HASHTABLEMAXSIZE is currently set to 12800 (with most of these entries being NULL for small to moderately sized networks), so it remains to be seen whether this approach is actually more efficient. If so, a similar replacement should be made for the hashtable_update call in EN_deletelink.
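To make the tradeoff concrete, here is a self-contained toy version of the shiftdown idea with a deliberately tiny table. In EPANET the table has HASHTABLEMAXSIZE (12800) buckets, so the full scan visits every bucket even when most are NULL; that fixed cost is what has to be weighed against one hash lookup per shifted node. All names and the hash function below are illustrative, not EPANET's actual hashtable.c.

```c
#include <stdlib.h>
#include <string.h>

#define TABLESIZE 8   /* toy size; EPANET uses 12800 buckets */

typedef struct DataEntry {
    char key[32];
    int  data;                 /* node's index in the Node array */
    struct DataEntry *next;
} DataEntry;

typedef DataEntry *HashTable;  /* array of bucket head pointers */

static unsigned hash(const char *key)
{
    unsigned h = 0;
    while (*key) h = h * 31 + (unsigned)*key++;
    return h % TABLESIZE;
}

void hashtable_insert(HashTable *ht, const char *key, int data)
{
    DataEntry *e = malloc(sizeof *e);
    strncpy(e->key, key, sizeof e->key - 1);
    e->key[sizeof e->key - 1] = '\0';
    e->data = data;
    e->next = ht[hash(key)];
    ht[hash(key)] = e;
}

/* Decrement every stored index above the deleted one by scanning all
   buckets once: O(table size + entries) total, versus one hash lookup
   per shifted node in the original per-node update loop. */
void hashtable_shiftdown(HashTable *ht, int index)
{
    for (int i = 0; i < TABLESIZE; i++)
        for (DataEntry *e = ht[i]; e != NULL; e = e->next)
            if (e->data > index) e->data--;
}

int hashtable_find(HashTable *ht, const char *key)
{
    for (DataEntry *e = ht[hash(key)]; e != NULL; e = e->next)
        if (strcmp(e->key, key) == 0) return e->data;
    return -1;  /* not found */
}
```

For a network with N nodes and a table of B buckets, the scan is O(B + N) regardless of where the deleted node sits, while the per-node update loop costs one hash lookup for each of the (N - index) shifted nodes; which wins depends on N relative to B.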
@jamesuber I pushed a new branch to
I ran some performance tests on the
Thanks @LRossman -- I will plan to run some network skeletonization use-case tests in order to get some results from a practical application to add to yours. Will report back.
I've had the chance to put the network editing functions through their paces using a network skeletonization use case for large network models. This uses EPANET as the backend store of network knowledge and a Boost graph representation of the network to support the skeletonization process. As changes are made to the network's links and nodes, those changes are sent to EPANET through the model editing API functions.
So this is an application that does a great deal of model editing, very unlike the occasional edits made through a UI. I was surprised to find that over 95% of the CPU time was spent in EPANET, because I figured all the heavy lifting would be in the graph skeletonization algorithms.
Talking to @LRossman about this, he thinks that the time is probably being spent on memory re-allocation. So this ticket is asking whether those memory requests can be streamlined somehow.