
New iOS backend, build system and desktop updates. #77

Merged — 28 commits merged into libgdx:master on Aug 31, 2023

Conversation

@dasisdormax (Contributor) commented Aug 13, 2023

Hello everyone,

This draft PR includes substantial changes to add video support to our game.

Original changes:

  • Add an iOS backend using RoboVM
  • Upgrade to Gradle 7 to support building with Java 17
  • Update dependencies to match current stable libGDX
  • Add support for the macOS arm64 target and cross-compiling from macOS to Linux and Windows (not tested yet)
  • Upgrade desktop builds to use FFmpeg 5.1.3, as I had trouble building the existing version. This also adds support for AV1 and Opus decoders.

Added later:

  • Update GitHub Actions
  • Add more usage instructions and hints to README
  • Add VideoActor for easier use

I focused on the features and platforms that are needed in our game, so GWT and Windows aren't as thoroughly tested.

I understand that this is quite a lot to review at once, but I could not separate the code changes from the build system changes.

Still, please feel free to test the current state and leave feedback, questions or suggestions.

This removes support for 32-bit Linux x86
@dasisdormax (Contributor, Author)

A small update on the progress:

Over the last few days I focused on getting Linux builds to work, and on building and testing the included test projects. I also updated the GitHub Actions to work with the Gradle changes, but I couldn't test the changes to snapshot and release publishing.

Known Issues / Things that I'll still work on:

  • On desktop, there are still audio and visual glitches when starting the video.
  • On iOS, setLooping is not implemented yet.
  • More tests on Android. I feel like my changes are more hacky than they need to be.
  • I'll try out Frosty-J's suggestion about preloading on GWT, but no guarantees about that.

As I won't have much time in the next few weeks, I'll skip the preload system for now and focus on fixing bugs and making the current version more stable. Once this is done, I'll mark this PR as ready for review. I still think the build and documentation updates and the iOS backend benefit the library's users and potential new contributors, so I don't want to delay them until the preload system is fully done.

On desktop, this removes the custom I/O and fixes issues with uninitialized audio and video buffers.
@SimonIT SimonIT linked an issue Aug 24, 2023 that may be closed by this pull request
@dasisdormax dasisdormax marked this pull request as ready for review August 26, 2023 14:57
@dasisdormax dasisdormax changed the title WIP: New iOS backend, build system and desktop updates. New iOS backend, build system and desktop updates. Aug 26, 2023
@dasisdormax (Contributor, Author)

Here are some more comments about the changes to the build system:

Gradle and dependency versions

Gradle and the dependencies are brought up to the same versions as current libGDX:

  • Gradle 6.1 -> 7.5
  • Android SDK 30 -> 32
  • Android Gradle Plugin 3.6 -> 7.2

This required replacing the 'maven' plugin with 'maven-publish'.
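
For reference, a minimal sketch of what publishing looks like with 'maven-publish' (illustrative only, not the PR's exact configuration):

    apply plugin: 'maven-publish'

    publishing {
        publications {
            mavenJava(MavenPublication) {
                from components.java   // replaces the old 'maven' plugin's uploadArchives flow
            }
        }
        repositories {
            // illustrative; the real setup points at the snapshot/release repositories
            mavenLocal()
        }
    }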

Github Actions

The actions were updated to work with the new Gradle version. For this, the Java and Gradle setup steps were borrowed from libGDX's own actions.

Also, the natives builds were split into separate Linux, macOS and Windows jobs. These jobs upload to a common 'desktop-natives' artifact, which is consumed by the final 'gradle-build' job. The 'desktop-natives' artifact is also convenient to download for local building and testing.
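
As an illustration, the artifact hand-off in such a workflow looks roughly like this (action versions, task names and paths are assumptions, not the exact workflow):

    jobs:
      natives-linux:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          - run: ./gradlew buildFFmpegLinux      # hypothetical task name
          - uses: actions/upload-artifact@v3
            with:
              name: desktop-natives
              path: gdx-video-desktop/libs       # hypothetical path
      gradle-build:
        needs: [natives-linux]                   # the real workflow also waits for macOS/Windows
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          - uses: actions/download-artifact@v3
            with:
              name: desktop-natives
          - run: ./gradlew build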

Desktop natives building

(gdx-video-desktop/build.gradle)

The compile steps for the desktop natives remain the same: first, a custom build of FFmpeg is compiled. Then, this build is combined with our own logic via gdx-jnigen to produce a shared library with everything included.

The configuration of the compile tasks works slightly differently now. A registerBuild function is configured that

  • tries to detect if we're cross-compiling and
  • combines common compile flags with platform-specific ones.

The flags were updated to work with the new FFmpeg version and the added AV1/Opus support.
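
A rough sketch of the registerBuild idea (names and details are illustrative, not the actual build.gradle code):

    // commonFlags and the task wiring are hypothetical stand-ins
    def registerBuild(String targetOs, String targetArch, List<String> platformFlags) {
        def hostOs = System.getProperty('os.name').toLowerCase()
        // Treat the build as cross-compiling when the host and target OS differ
        def crossCompile = !hostOs.contains(targetOs.toLowerCase())
        def flags = commonFlags + platformFlags
        if (crossCompile) {
            // These are real FFmpeg configure options for cross builds
            flags += ["--enable-cross-compile", "--target-os=${targetOs.toLowerCase()}", "--arch=${targetArch}"]
        }
        tasks.register("buildFFmpeg${targetOs}${targetArch}", Exec) {
            workingDir 'FFmpeg'
            commandLine(['./configure'] + flags)
        }
    }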

@dasisdormax (Contributor, Author) commented Aug 26, 2023

Here are some more comments about the updated desktop implementation:

The FFmpeg submodule was updated to version 5.1.3. This required significant changes to the VideoDecoder class, but also allowed some simplifications: the FFmpeg initialization and custom I/O methods are no longer required.

An overview of what the code does:

  • When calling play, the file is handed to an AVFormatContext, which takes care of parsing the file and extracting individual packets. First, we get information and metadata about the audio and video streams. We set up the AVCodecContext and create buffers for the audio samples and video frames. (Functions VideoDecoder::loadFile and VideoDecoder::loadContainer)
  • A separate video decoding thread is started. It retrieves video packets from the format context, sends them to the codec context for decoding, and reads the decoded frames. The decoded frames are then converted to RGB texture data and stored at the front of the rgbFrames ring buffer. Once the buffer is full, the thread goes to sleep. (Function VideoDecoder::run; see the sketch after this list)
  • When update is called and it is time to display the next frame, it is pulled from the back of the ring buffer. Then, we wake up the decoding thread to have more frames decoded. (Function VideoDecoder::nextVideoFrame)
  • Audio is decoded on demand, when requested by OpenAL. The decoding works the same way; the output is then converted to the proper sample rate and format for OpenAL. As this runs on a different thread from video decoding, access to the format context is guarded by the packetMutex. (Functions VideoDecoder::updateAudioBuffers, VideoDecoder::decodeAudio and VideoDecoder::readPacket)
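
To make the threading concrete, here is an illustrative sketch of the decoding-thread loop (rgbFrames and packetMutex are named above; all other field and helper names are assumptions, not the PR's actual code):

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    }
    #include <mutex>

    void VideoDecoder::run() {
        AVPacket* packet = av_packet_alloc();
        AVFrame* frame = av_frame_alloc();
        while (!stopRequested) {
            waitUntilBufferHasSpace(); // hypothetical helper: sleep while rgbFrames is full
            {
                // The format context is shared with the audio thread, so lock it
                std::lock_guard<std::mutex> lock(packetMutex);
                if (av_read_frame(formatContext, packet) < 0) break; // end of file or error
            }
            if (packet->stream_index == videoStreamIndex) {
                avcodec_send_packet(videoCodecContext, packet);
                while (avcodec_receive_frame(videoCodecContext, frame) == 0) {
                    // Convert the decoded frame to RGB texture data and store it
                    // at the front of the ring buffer (assuming RGB888 output)
                    uint8_t* dst = rgbFrames.front();             // hypothetical ring-buffer API
                    int dstStride[1] = { frame->width * 3 };
                    sws_scale(swsContext, frame->data, frame->linesize, 0,
                              frame->height, &dst, dstStride);
                    rgbFrames.pushFront();                        // hypothetical ring-buffer API
                }
            }
            av_packet_unref(packet);
        }
        av_frame_free(&frame);
        av_packet_free(&packet);
    }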

@dasisdormax (Contributor, Author)

On the iOS implementation:

I feel it is quite straightforward overall.
The provided file is opened as an AVAsset, which is queried for the track metadata and passed to an AVPlayerItem. As audio output is handled automatically, we only add a videoOutput to the AVPlayerItem to request the video frames as RGB data. The AVPlayerItem is added to an AVPlayer for playback control.

When calling update, we check the video output for new frames and create a libGDX Texture from it.
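
A sketch of that update flow in Java (the method names follow Apple's AVFoundation API; RoboVM's bindings mirror them closely, but treat the exact signatures and helpers here as assumptions, not the PR's code):

    // Assumed imports: org.robovm.apple.avfoundation.*,
    // org.robovm.apple.coremedia.CMTime, org.robovm.apple.corevideo.CVPixelBuffer
    public void update () {
        CMTime now = player.getCurrentTime();
        if (videoOutput.hasNewPixelBufferForItemTime(now)) {
            CVPixelBuffer pixelBuffer = videoOutput.copyPixelBufferForItemTime(now, null);
            // Copy the RGB pixel data into a Pixmap and upload it as a libGDX Texture
            texture = textureFromPixelBuffer(pixelBuffer); // hypothetical helper
        }
    }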

@SimonIT SimonIT linked an issue Aug 27, 2023 that may be closed by this pull request
public void draw (Batch batch, float parentAlpha) {
    Texture texture = player.getTexture();
    if (texture == null) return;
    texture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
Member:

Is there a reason to set a TextureFilter? Does it need to be set on every draw?

@dasisdormax (Contributor, Author):

I felt the pixel steps were too visible with the default Nearest filter when resizing the window in the examples. But to be honest, what looks best is subjective.

The filter stays on the texture once it is set, so there is no need to set it multiple times. However, setting it just once doesn't always work, as the texture object may change in some cases. So this implementation is simple and reliable rather than optimal.

Looking at it a second time, however, I don't like that this doesn't allow users of VideoActor to set another filter if it fits their game better.

I could make the filter configurable in VideoActor and optimize the implementation. Or I could just remove it from VideoActor for now and let the user handle all of it. Which would you prefer?

Member:

I think removing it is the best option for now. Another solution would be to set the filter as an attribute on the player, so that when a new texture is created, the filter can be applied.
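
A sketch of that second suggestion, with the filter stored as an attribute and re-applied only when the texture object changes (hypothetical code, not from this PR):

    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.g2d.Batch;
    import com.badlogic.gdx.scenes.scene2d.Actor;
    import com.badlogic.gdx.video.VideoPlayer;

    public class VideoActor extends Actor {
        private final VideoPlayer player;
        private Texture.TextureFilter minFilter = Texture.TextureFilter.Linear;
        private Texture.TextureFilter magFilter = Texture.TextureFilter.Linear;
        private Texture lastTexture;

        public VideoActor (VideoPlayer player) {
            this.player = player;
        }

        public void setFilter (Texture.TextureFilter min, Texture.TextureFilter mag) {
            minFilter = min;
            magFilter = mag;
            lastTexture = null; // force a re-apply on the next draw
        }

        @Override
        public void draw (Batch batch, float parentAlpha) {
            Texture texture = player.getTexture();
            if (texture == null) return;
            if (texture != lastTexture) {
                // Only touch the filter when the underlying texture object changed
                texture.setFilter(minFilter, magFilter);
                lastTexture = texture;
            }
            batch.draw(texture, getX(), getY(), getWidth(), getHeight());
        }
    }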

@SimonIT SimonIT merged commit 13af998 into libgdx:master Aug 31, 2023
4 checks passed
@SimonIT (Member) commented Aug 31, 2023

Thanks for the massive work!

@Frosty-J (Contributor)

On desktop, AV1 doesn't seem to work. Any thoughts? I've only tried on Windows, encoding with av1_nvenc. VP8 and VP9 are fine.

[av1 @ 0000017ce51f5800] Your platform doesn't suppport hardware accelerated AV1 decoding.
[av1 @ 0000017ce51f5800] Failed to get pixel format.
[av1 @ 0000017ce51f5800] Missing Sequence Header.
[av1 @ 0000017ce51f5800] Missing Sequence Header.
[...]
[matroska,webm @ 0000017ce51f1100] Could not find codec parameters for stream 0 (Video: av1 (Main), none(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9]): unspecified pixel format
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
[VideoPlayer::loadFile] video stream found [index=0]
[VideoPlayer::loadFile] audio stream found [index=1]
[VideoPlayer::loadFile] Loading audio resampler ...
[VideoPlayer::loadFile] Loading video scaler ...
Assertion desc failed at src/libswscale/swscale_internal.h:725

After reading the error, I tried encoding with -pix_fmt yuv420p, which unsurprisingly made no difference.
