
CPU high io wait #110

Open
9000h opened this issue Mar 16, 2018 · 8 comments

Comments

@9000h
Contributor

9000h commented Mar 16, 2018

As I cannot reopen the closed one, I created a new issue.
There is no disk I/O, btw.
vaapidevice causes high CPU I/O wait (the old softhddevice did the same).
screenshot from 2018-03-16 22-28-23

When I use xineliboutput, the CPU I/O wait is very low, as expected:
vdr -P"xineliboutput --local=sxfe --video=vaapi --audio=alsa --remote=none" -Psatip
screenshot from 2018-03-16 22-18-39
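
(Side note for anyone reproducing this: the iowait shown in the screenshots can be watched with standard tools, e.g. the "wa" column of vmstat or the %iowait value of iostat; the commands below are just one way to do it:)

$ vmstat 1
$ iostat -c 1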

@Space2Man

Hi,

I checked with perf and saw the following:

    17.25%     0.05%  vdr              [kernel.vmlinux]             [k] entry_SYSCALL_64
            |          
             --17.20%--entry_SYSCALL_64
                       do_syscall_64
                       |          
                       |--13.63%--sys_ioctl
                       |          |          
                       |          |--11.94%--do_vfs_ioctl
                       |          |          |          
                       |          |           --11.53%--drm_ioctl
                       |          |                     |          
                       |          |                      --10.66%--drm_ioctl_kernel
                       |          |                                |          
                       |          |                                |--7.66%--i915_gem_execbuffer2
                       |          |                                |          |          
                       |          |                                |           --7.46%--i915_gem_do_execbuffer

Seems like it accesses the Intel card via VFS --> therefore it's listed as I/O.

Can you do the same check with both vaapidevice and xineliboutput? I did the following to get the above graph:

# perf record -g -a sleep 10
[ perf record: Woken up 18 times to write data ]
[ perf record: Captured and wrote 4.592 MB perf.data (34714 samples) ]
# perf report

I am not sure if this is really an issue or if it just shows that vaapidevice accesses the gfx card via VFS calls.
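
If it helps, the recording can also be limited to the vdr process alone (assuming perf is installed and vdr is already running; the 10 second window is arbitrary):

# perf record -g -p $(pidof vdr) -- sleep 10
# perf report --stdio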

@rofafor
Contributor

rofafor commented Mar 17, 2018

If you're comparing vaapidevice to another implementation like xineliboutput, please make sure that both of them have exactly the same VA-API filters enabled. Otherwise, you'll end up comparing apples to oranges.
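
(A quick sanity check, assuming both plugins run against the same libva stack: compare the driver and VA-API version they pick up, e.g. with vainfo from libva-utils; the filter settings themselves live in each plugin's own configuration:)

$ vainfo
$ LIBVA_DRIVER_NAME=i965 vainfo    # force a specific driver if several are installed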

@9000h
Contributor Author

9000h commented Mar 17, 2018

Sure, on vaapidevice the filters are off.
Also, mpv does not show high CPU I/O wait with hwdec=vaapi vo=vaapi when playing back HD/UHD recordings.
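
(The exact mpv invocation isn't shown above, but the equivalent command-line form would be something like this; the recording path is just a placeholder:)

$ mpv --hwdec=vaapi --vo=vaapi /path/to/recording.ts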

@jupe76

jupe76 commented Mar 17, 2018

Hi,
I've also been observing this weirdness for some time now.
Definitely the kernel is involved in this.
With softhddevice vaapi on the last LTS kernel 4.9 I always had a very low load of 0.4 or so.
Only using a newer kernel, from 4.10 or 4.11 on, shifts the load to where it is now (2.x ...).
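
(One way to check whether the higher load is real CPU work or just threads parked in uninterruptible sleep, which Linux counts into both iowait and the load average: look for D-state threads and their kernel stacks while vdr is playing. This assumes a single vdr process; reading the stack files needs root:)

$ ps -eLo state,pid,comm | grep '^D'
# cat /proc/$(pidof vdr)/task/*/stack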

@9000h
Contributor Author

9000h commented Mar 17, 2018

I can also confirm that on older kernels (4.8) and with older ffmpeg/libav etc. the iowait is normal, but why are mpv and xineliboutput not affected as well?

@Quantomax

Quantomax commented Mar 18, 2018

Kernel 4.14.11 and newer may have the Spectre/Meltdown fixes enabled - maybe this is the game changer?
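
(Whether page table isolation is actually active on a given kernel can be checked like this; the sysfs files only exist on kernels that already ship the vulnerability reporting:)

$ dmesg | grep -i 'page tables isolation'
$ cat /sys/devices/system/cpu/vulnerabilities/meltdown
$ cat /proc/cmdline    # "nopti" or "pti=off" would disable it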

@9000h
Contributor Author

9000h commented Mar 18, 2018

I don't think so, as mpv and xineliboutput do not show a similar behavior.

@9000h
Contributor Author

9000h commented Mar 18, 2018

All the thread locking around the FFmpeg API calls is probably not needed anymore, as the newer libs now have their own locking mechanism.
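
(If I remember correctly, the global lock manager, av_lockmgr_register(), was deprecated around FFmpeg 4.0 / libavcodec 58, so whether the plugin still needs its own locking can be judged from the installed library version, e.g.:)

$ ffmpeg -version | head -n1
$ pkg-config --modversion libavcodec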
