
Item editing becomes slow when there are lots of items. #2374

Open
axeldavy opened this issue Aug 3, 2024 · 8 comments
Labels: state: pending (not addressed yet), type: bug

Comments

axeldavy commented Aug 3, 2024

Version of Dear PyGui

Version: 1.11.1
Operating System: Arch Linux

My Issue/Question

Once a significant number of items has been created, editing a large number of items becomes increasingly slow.

To Reproduce

Here is code that reproduces the issue:

import dearpygui.dearpygui as dpg
import time
import numpy as np

dpg.create_context()
dpg.create_viewport()
dpg.setup_dearpygui()

texture_data = np.ones([100, 100, 4], dtype=np.float32)
texture_data[:,:,:3] = np.random.randn(3)[np.newaxis, np.newaxis, :]
with dpg.texture_registry(tag="textures"):
    dpg.add_static_texture(100, 100, texture_data, tag="__demo_static_texture_1")

with dpg.window(tag="Primary Window"):
    with dpg.plot(tag='plot', width=-1, height=-1):
        dpg.add_plot_axis(dpg.mvXAxis)
        with dpg.plot_axis(dpg.mvYAxis):
            dpg.add_image_series("__demo_static_texture_1", [0, 0], [100, 100], tag="series1")

N = 0

def add_elements():
    global N
    t_1 = 1000 * time.time()
    tag = dpg.add_draw_layer(show=False, parent='plot')
    tags = []
    for i in range(1000):
        tags.append(dpg.draw_line((0, 0), (1, 1), parent=tag))
    t_2 = 1000 * time.time()
    for t in tags:
        dpg.configure_item(t, thickness=1.2)
    t_3 = 1000 * time.time()
    print("Creation time: %d ms. Edition time: %d ms" % (t_2 - t_1, t_3 - t_2))
    N += 1000

dpg.show_viewport()
dpg.set_primary_window("Primary Window", True)
t0 = 1000 * time.time()
i = 0
while dpg.is_dearpygui_running():
    t1 = 1000 * time.time()
    if i % 10 == 0:
        add_elements()
    elif i % 10 == 5:
        # Show frame time on a different frame than element creation
        # to prevent bias
        print("%d elements. Frame time: %d ms" % (N, t1-t0))
    dpg.render_dearpygui_frame()  
    t0 = t1
    i += 1
dpg.destroy_context()

In this code, the number of items grows linearly, but all of the items are hidden and encapsulated inside a draw layer, so the frame time is not affected much.
Item creation speed also remains stable as the item count grows.

However, editing items becomes very slow.

I encountered this bug because I wanted to edit the thickness of some custom drawing items in a plot in response to zoom changes, and noticed that this became quadratically slow in the number of elements drawn (more items whose thickness needs editing, and each edit is itself linearly slower). The proper solution to my thickness problem is to add a configuration option to not scale thickness with the plot zoom level (DearPyGui scales the thickness by ImPlot->Mx, and I'm basically inverting that). But regardless of my individual issue, the example above shows that it quickly becomes impossible to edit item configuration in real time.

On my computer the code will display:

Creation time: 15 ms. Edition time: 16 ms
1000 elements. Frame time: 24 ms
Creation time: 47 ms. Edition time: 26 ms
2000 elements. Frame time: 28 ms
Creation time: 38 ms. Edition time: 32 ms
3000 elements. Frame time: 13 ms
Creation time: 46 ms. Edition time: 46 ms
4000 elements. Frame time: 33 ms
[....]
Creation time: 35 ms. Edition time: 2449 ms
55000 elements. Frame time: 15 ms
Creation time: 35 ms. Edition time: 2457 ms
56000 elements. Frame time: 31 ms

As one can see, editing the configuration of 1000 elements is already not real-time when there are only 2000 elements in total. As soon as there are 6000 elements, it already takes more than 100 ms.

The same slowdown shows up when deleting items: deleting a draw layer with many elements is orders of magnitude faster than deleting all the elements individually.

Expected behavior

The slowdown is likely due to the UUID search, which seems to walk linearly through the whole item graph until the UUID is found.
Creation time is not affected much because the UUID search has a cache of 25 elements; when editing or deleting elements, however, that cache is no longer useful.
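
For illustration, a lookup that would explain these numbers might look like the sketch below (hypothetical code reconstructed from the observations above, not DearPyGui's actual implementation):

// Hypothetical sketch of a linear UUID lookup fronted by a small cache.
#include <array>
#include <cstdint>
#include <vector>

struct Item {
    std::uint64_t uuid = 0;
    std::vector<Item*> children;
};

// The "cache of 25 elements" mentioned above: most recently used items.
static std::array<Item*, 25> g_cache{};

// Slow path: linear walk over the whole item graph until the UUID matches.
Item* SearchTree(Item* root, std::uint64_t uuid) {
    if (!root)
        return nullptr;
    if (root->uuid == uuid)
        return root;
    for (Item* child : root->children)
        if (Item* found = SearchTree(child, uuid))
            return found;
    return nullptr;
}

Item* GetItem(Item* root, std::uint64_t uuid) {
    for (Item* cached : g_cache)            // fast path: recently used items
        if (cached && cached->uuid == uuid)
            return cached;
    return SearchTree(root, uuid);          // O(n) in the total item count
}

Under such a scheme, creating a child keeps its parent's UUID hot in the cache, so creation stays fast; configuring 1000 distinct items one by one misses the cache almost every time and pays the full O(n) traversal per item.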

Expected behavior would be O(1) or O(log n) operations, using hashing or sorting.

Dear ImGui seems to have ImPool/ImGuiID helpers that give O(log n) lookups.
However, given that DearPyGui shouldn't expect to see 100K items in practice, I think the simplest and fastest approach would be a plain hash table: for example, a table of 1024 slots, where each slot holds a list of (uuid, pointer) pairs and an element lands in slot uuid % 1024. To avoid allocating nodes for the table, the best approach is an intrusive linked list: each item structure carries a list field pointing to the next and previous elements in its slot. This makes removal free and avoids any allocation specifically for the hash table (see https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/util/list.h for an example of such a linked list).
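
A minimal sketch of that table, with hypothetical item and node layouts (not DearPyGui's actual internals):

// Intrusive doubly-linked list node, embedded directly in each item so the
// hash table itself never allocates (compare Mesa's src/util/list.h).
#include <cstddef>
#include <cstdint>

struct ListNode {
    ListNode* prev;
    ListNode* next;
};

struct Item {
    std::uint64_t uuid;
    ListNode hashLink;  // links this item into its hash slot
    // ... the rest of the item's state ...
};

constexpr std::size_t kTableSize = 1024;

struct ItemTable {
    ListNode slots[kTableSize];  // each slot is a circular list head

    ItemTable() {
        for (ListNode& head : slots)
            head.prev = head.next = &head;
    }

    void Insert(Item* item) {
        ListNode& head = slots[item->uuid % kTableSize];
        item->hashLink.next = head.next;
        item->hashLink.prev = &head;
        head.next->prev = &item->hashLink;
        head.next = &item->hashLink;
    }

    // Removal is O(1) and needs no lookup: the item carries its own links.
    static void Remove(Item* item) {
        item->hashLink.prev->next = item->hashLink.next;
        item->hashLink.next->prev = item->hashLink.prev;
    }

    Item* Find(std::uint64_t uuid) {
        ListNode& head = slots[uuid % kTableSize];
        for (ListNode* n = head.next; n != &head; n = n->next) {
            // Recover the owning Item from the embedded node.
            Item* item = reinterpret_cast<Item*>(
                reinterpret_cast<char*>(n) - offsetof(Item, hashLink));
            if (item->uuid == uuid)
                return item;
        }
        return nullptr;
    }
};

Since every item embeds its own links, removing it from the table is a constant-time unlink with no lookup and no deallocation.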

axeldavy added the state: pending and type: bug labels on Aug 3, 2024

axeldavy commented Aug 3, 2024

Since the C++ standard library is already used in this project, std::unordered_map can be used; it is basically a hash table.

A simple patch to use it for GetItem and GetItemRef (axeldavy@d7125ca) drastically improves performance:

Creation time: 42 ms. Edition time: 6 ms
1329000 elements. Frame time: 19 ms
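
For illustration, the idea behind such a patch looks roughly like this (hypothetical names and layout; the real change is in the commit linked above):

// Sketch of an unordered_map-based item registry (hypothetical names;
// not the actual patch).
#include <cstdint>
#include <memory>
#include <unordered_map>

// Stand-in for DearPyGui's item type, reduced to a minimum.
struct Item {
    std::uint64_t uuid;
    // ... widget state ...
};

class ItemRegistry {
    std::unordered_map<std::uint64_t, std::shared_ptr<Item>> lookup;

public:
    // Called wherever items are created/destroyed to keep the map in sync.
    void Register(const std::shared_ptr<Item>& item) { lookup[item->uuid] = item; }
    void Unregister(std::uint64_t uuid) { lookup.erase(uuid); }

    // Average O(1), replacing the linear tree walk in GetItem/GetItemRef.
    std::shared_ptr<Item> GetItemRef(std::uint64_t uuid) const {
        auto it = lookup.find(uuid);
        return it != lookup.end() ? it->second : nullptr;
    }
};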

Ideally, DeleteItem should be improved as well; I guess we need a fast way to get the parent of an item.


v-ein commented Aug 3, 2024

You're retracing my steps 🤣 and sometimes even going faster than me 👍. Like, a hashtable-based GetItem was one of my planned improvements but it needs to be done carefully, and I've been postponing it for months.


v-ein commented Aug 3, 2024

Please don't rush to fix all synchronization and refcount issues, wait until I publish my fixes 😂 no need for duplicate efforts ;).


axeldavy commented Aug 3, 2024

Using std::unordered_map seems like the easiest way to get a hash-table-based GetItem. And as you can see in my log, even with more than a million elements it's several times faster than the old code was with 1000 elements.

Also, it's apparently easy to get the parent of an item, so DeleteItem comes naturally too (DeleteChild on the parent); see the sketch below.
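
A minimal sketch of that shape, with hypothetical names and a simplified item tree:

// Sketch: DeleteItem via a parent pointer and a hash-based lookup
// (hypothetical names and structure; not DearPyGui's actual code).
#include <algorithm>
#include <cstdint>
#include <memory>
#include <unordered_map>
#include <vector>

struct Item {
    std::uint64_t uuid = 0;
    Item* parent = nullptr;                       // back-pointer to the parent
    std::vector<std::unique_ptr<Item>> children;  // parent owns its children
};

std::unordered_map<std::uint64_t, Item*> g_lookup;  // uuid -> item

// Drop the whole subtree from the lookup map before it is destroyed.
void UnregisterSubtree(Item* item) {
    g_lookup.erase(item->uuid);
    for (auto& child : item->children)
        UnregisterSubtree(child.get());
}

void DeleteItem(std::uint64_t uuid) {
    auto it = g_lookup.find(uuid);       // O(1) instead of a full tree walk
    if (it == g_lookup.end())
        return;
    Item* item = it->second;
    UnregisterSubtree(item);
    if (Item* parent = item->parent) {   // "DeleteChild on the parent"
        auto& siblings = parent->children;
        siblings.erase(
            std::remove_if(siblings.begin(), siblings.end(),
                           [item](const std::unique_ptr<Item>& c) {
                               return c.get() == item;
                           }),
            siblings.end());             // unique_ptr frees the subtree here
    }
}

This also matches the observation above that deleting a whole draw layer is fast: it removes a single child from its parent, and the subtree is torn down in one go.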

I'll try to wait for the synchronization and refcount fixes. Do you have a public branch I could pull?

EDIT: I also observe a significant performance boost in the framerate.


v-ein commented Aug 3, 2024

Unfortunately, I don't have a public branch because my local build contains components specific to my project, which I can't share. I typically cherry-pick commits into my fork https://github.com/v-ein/DearPyGui-fixes right before opening a PR, but this time I want to get #2275 merged first and then pull it into my repo, because the new code might have similar sync issues and I'd prefer to re-check it first.


axeldavy commented Aug 5, 2024

In that case, I would appreciate some hints as to what is causing the current hangs.

Even if I do not release completely in order to remove the GIL in the mvAppItem destructor, I get hangs.

I have a lot of hangs; some temporary workarounds would be helpful.


v-ein commented Aug 8, 2024

Even if I do not release completely in order to remove the GIL in the mvAppItem destructor, I get hangs.

Not sure what "do not release completely" means; do you mean you're not deleting items?

I have a lot of hangs

Like deadlocks in #2053? Exactly when do they happen in your app?

The problem with the current version of DPG is that you don't have to do anything special... just use DPG from a worker thread and you'll get a chance for a deadlock. I don't think there are any workarounds on the Python side of things; this has to be fixed in C++.


v-ein commented Aug 8, 2024

Just thought that you could try increasing the interpreter's thread switch interval via sys.setswitchinterval and see if that makes things better.
