Releases: jchambers/pushy
v0.13.0 - Improvements everywhere!
This release contains a wide assortment of improvements to Pushy.
- Added an entirely separate benchmark server to make sure that our benchmarks are measuring client performance and not secretly measuring the performance of the server instead (#585)
- Added support for the `apns-id` header (#577)
- We no longer treat `InternalServerError` responses from the APNs server as write exceptions, and instead treat them as rejections (#576)
- Changed the default expiration time for push notifications from "immediate" to 24 hours (#593); see the sketch after this list
- Spread connections across all available APNs servers by using round-robin DNS resolution (#594)
- Added a separate metrics listener using the Micrometer application monitoring facade (#597)
- Perform protocol negotiation directly and drop ALPN entirely (#598)
- Dropped Apache Commons Codec as a dependency (#599)
- Improved exception handling and reporting when something goes wrong while opening a new connection (#602)
- Updated to Netty 4.1.23 (#596)
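Because the default expiration changed in #593, callers who care about expiration behavior may want to set it explicitly rather than rely on the default. Here's a minimal sketch, assuming the `SimpleApnsPushNotification` constructor that takes an expiration `Date` and the usual `ApnsPayloadBuilder` helpers; the device token and topic are placeholders:

```java
import java.util.Date;

import com.turo.pushy.apns.util.ApnsPayloadBuilder;
import com.turo.pushy.apns.util.SimpleApnsPushNotification;
import com.turo.pushy.apns.util.TokenUtil;

public class ExpirationExample {

    public static void main(final String[] args) {
        final ApnsPayloadBuilder payloadBuilder = new ApnsPayloadBuilder();
        payloadBuilder.setAlertBody("Example alert");

        // An explicit expiration date overrides the new 24-hour default; notifications
        // built without one now expire 24 hours after they're sent.
        final SimpleApnsPushNotification pushNotification = new SimpleApnsPushNotification(
                TokenUtil.sanitizeTokenString("<device token goes here>"), // placeholder token
                "com.example.topic",                                       // placeholder topic
                payloadBuilder.buildWithDefaultMaximumLength(),
                new Date(System.currentTimeMillis() + 60 * 60 * 1000));    // expire in one hour

        // ... send pushNotification with an ApnsClient as usual
    }
}
```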
Please note that because we no longer use ALPN for protocol negotiation, use of `alpn-agent` or a native SSL provider is no longer required, but users can still expect significant performance gains from a native SSL provider.
For a complete list of changes in this release, please see the v0.13.0 milestone.
v0.12.1 - Fixed a hang after a failed connection attempt
This release primarily fixes a bug (#583) where clients could "hang" after a connection attempt failed. It also updates to the latest version of Netty (4.1.22).
v0.12.0 - More flexible mock server
This release is focused on enhancing Pushy's mock APNs server. We've made the server more flexible, polished the docs, and generally made it more of a first-class tool for folks who want to write integration tests and benchmarks. The two most notable new features are:
- A `PushNotificationHandler` interface that allows callers to implement their own logic for accepting or rejecting push notifications (for example, to simulate certain edge cases and failure modes); see the sketch below
- A `MockApnsServerListener` interface that gets notified when a mock server accepts or rejects push notifications
We think these changes will be a big help to anybody who's trying to use a mock server for testing, and we look forward to your feedback!
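For instance, a handler that rejects notifications addressed to one particular device token might look roughly like the sketch below. The method signature and the `RejectedNotificationException`/`RejectionReason` types reflect our reading of the new mock-server API, so treat the details as assumptions and check the Javadoc; the "bad" token is a placeholder.

```java
import com.turo.pushy.apns.server.PushNotificationHandler;
import com.turo.pushy.apns.server.RejectedNotificationException;
import com.turo.pushy.apns.server.RejectionReason;

import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.http2.Http2Headers;

// Accepts every notification except those addressed to one "bad" device token.
public class RejectOneTokenHandler implements PushNotificationHandler {

    // The request path for a notification is "/3/device/<token>".
    private static final String BAD_TOKEN_PATH = "/3/device/example-bad-token";

    @Override
    public void handlePushNotification(final Http2Headers headers, final ByteBuf payload)
            throws RejectedNotificationException {

        if (BAD_TOKEN_PATH.contentEquals(headers.path())) {
            // Throwing rejects the notification; the mock server reports the given reason.
            throw new RejectedNotificationException(RejectionReason.BAD_DEVICE_TOKEN);
        }

        // Returning normally accepts the notification.
    }
}
```

A factory that produces handlers like this one gets attached when building the mock server, so different tests can plug in different acceptance/rejection behavior.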
Additional changes and bug fixes include:
- Callers can now attach HTTP/2 frame loggers to clients for debugging purposes (#536)
- Introduced a `PushNotificationFuture` interface that allows callers to get push notifications from futures even if those futures fail (and also simplifies some really long generic chains) (#535); see the sketch after this list
- Fixed a bug where we'd try to keep re-opening connections to servers that had shut down (#571)
- Updated to Netty 4.1.19 (#575)
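As an illustration of the `PushNotificationFuture` change, here's a sketch of a listener that recovers the original notification from a failed send so it can be retried. The package location, generic parameters, and the `getPushNotification()` accessor are our reading of the new interface; verify against the Javadoc before relying on them:

```java
import com.turo.pushy.apns.ApnsPushNotification;
import com.turo.pushy.apns.PushNotificationResponse;
import com.turo.pushy.apns.util.concurrent.PushNotificationFuture;

import io.netty.util.concurrent.GenericFutureListener;

// Re-queues notifications whose futures failed outright (e.g. because a connection died).
public class RequeueOnFailureListener<T extends ApnsPushNotification>
        implements GenericFutureListener<PushNotificationFuture<T, PushNotificationResponse<T>>> {

    @Override
    public void operationComplete(final PushNotificationFuture<T, PushNotificationResponse<T>> future) {
        if (!future.isSuccess()) {
            // The future failed, but we can still get back the notification we tried to send.
            final T pushNotification = future.getPushNotification();

            // ... hand pushNotification back to whatever queue feeds the client
        }
    }
}
```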
For a complete list of changes, please see the v0.12 milestone.
v0.11.3 - Better handling of upstream server errors
This release addresses a few oversights since moving away from caller-controlled connections to a connection-pool model. Most importantly:
- `ApnsClient` instances will now automatically replace connections that report an `InternalServerError`, which in practice often means the connection is permanently unusable
- Updated some documentation that still referred to explicitly connecting/disconnecting clients
Additionally, we updated to the latest version of Netty to pick up the latest HTTP/2 bug fixes and performance enhancements.
You may be wondering what happened to v0.11.2. The simple answer is that I made a typo when tagging the release; this release really should have been 0.11.2. Lesson learned: don't operate heavy machinery or perform releases while taking medication for a cold.
For a complete list of changes, please see the v0.11.3 milestone.
v0.11.1 - Bug fixes
This release fixes a minor bug with connection pooling and a rather embarrassing error:
- Fixed a problem where notifications sent while a connection pool is closing might not be resolved.
- Re-restored reference-counted SSL handlers; this was intended to be in v0.11.0, but human error (we merged the reference-counted SSL handler branch into the wrong target branch) got in the way.
For a complete list of changes, please see the v0.11.1 milestone.
v0.11.0 - Connection pooling, simpler reconnection mechanics
This release represents a significant architectural shift. `ApnsClient` instances now maintain their own internal connection pools, connect on demand, and manage all of the reconnection timing on their own. This should make life much simpler for most users, and should open the door to performance gains for industrial-scale users who were previously constrained by running clients on a single thread. This should also fix a host of reported bugs around reconnection and resource usage.
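Concretely, building a client now looks something like the sketch below: there is no `connect()` call, and the pool size is just a builder option. The builder method names and the credential file are illustrative assumptions, so check them against the version you're using:

```java
import java.io.File;

import com.turo.pushy.apns.ApnsClient;
import com.turo.pushy.apns.ApnsClientBuilder;

public class PooledClientExample {

    public static void main(final String[] args) throws Exception {
        // The client connects on demand and manages its own pool of connections;
        // there is no explicit connect()/disconnect() step anymore.
        final ApnsClient apnsClient = new ApnsClientBuilder()
                .setApnsServer("api.development.push.apple.com")
                .setClientCredentials(new File("example-client.p12"), "example-password") // placeholder credentials
                .setConcurrentConnections(4) // size of the internal connection pool
                .build();

        // ... send notifications; connections are opened (and replaced) as needed

        apnsClient.close().await();
    }
}
```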
Other changes include:
- Updated to Netty 4.1.14 for the latest bug fixes and performance improvements (#510)
- Restored reference-counted SSL providers, which should resolve issues around direct memory usage and reclamation (#515)
- Added support for building MDM payloads (#516)
For a complete list of changes, please see the v0.11 milestone.
v0.10.2 - Bug fixes
This release fixes a couple more bugs:
- Fixed an issue where we had built a new frame listener, but weren't actually attaching it to an HTTP/2 connection, which could lead to dropped notifications when authentication tokens expired (#496)
- We now trigger `PING` frames when the read "side" of a connection goes idle (as opposed to both the read AND write sides being idle) to catch cases where we're happily sending data to a server that isn't listening (#493)
For a complete list of changes, please see the v0.10.2 milestone.
v0.10.1 - Bug fixes
This release includes a number of significant bug fixes. The most notable changes:
- Fixed a bug where some notifications could "hang" without a response if the underlying channel were closed unexpectedly
- Fixed a bug where some notifications could "hang" without a response when authentication tokens expired
- Temporarily rolled back to the non-reference-counted native SSL provider to resolve some memory leaks
For a complete list of changes, please see the v0.10.1 milestone.
v0.10 - Moving things around
This update includes a few minor features and a whole lot of housekeeping. First, the new things:
- Updated to Netty 4.1.11 and added support for the KQueue transport
- Used the reference-counted native SSL provider where possible; this should generally improve recovery of direct memory
- Made HTTP/2 `PING` intervals user-selectable
As for the housekeeping, many of these changes are breaking API changes and will require intervention when upgrading. Major changes include:
- Moved everything from the `com.relayrides` package to the `com.turo` package to reflect RelayRides' name change to Turo. The group ID for our published artifacts has also changed from `com.relayrides` to `com.turo`. Users will need to change the group ID in their dependency declarations and change package names in code.
- Revised our model for authentication tokens; due to a prior misunderstanding of the APNs protocol design, we had included functionality to add an arbitrary number of keys/teams/topics to a single token-based client. It turns out that APNs only supports one team per connection, so we've changed the API to reflect that limitation. Users will need to specify a single key when building a client, and will need to use multiple clients to cover multiple teams' topics (see the sketch after this list).
- Removed connection-wide write timeouts, which were causing more problems than they were solving; users will simply need to remove calls to `setWriteTimeout` methods.
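For token authentication, the revised model means a client is built around exactly one signing key. A rough sketch under that model follows; the key file, team ID, and key ID are placeholders, and the exact builder/loader method names should be checked against the Javadoc:

```java
import java.io.File;

import com.turo.pushy.apns.ApnsClient;
import com.turo.pushy.apns.ApnsClientBuilder;
import com.turo.pushy.apns.auth.ApnsSigningKey;

public class SingleTeamClientExample {

    public static void main(final String[] args) throws Exception {
        // One signing key per client means one team per client; covering several
        // teams' topics now requires building several clients.
        final ApnsClient apnsClient = new ApnsClientBuilder()
                .setSigningKey(ApnsSigningKey.loadFromPkcs8File(
                        new File("APNsAuthKey_EXAMPLE.p8"), // placeholder key file
                        "EXAMPLETEAMID",                    // placeholder team ID
                        "EXAMPLEKEYID"))                    // placeholder key ID
                .build();

        // ... connect and send notifications for this team's topics as before
    }
}
```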
We apologize for the unusually high number of significant breaking changes, but believe these changes position us to keep things moving smoothly in the future. Thanks for your patience!
For a complete list of changes, please see the v0.10 milestone.
v0.9.3 - Bug fixes and a big dependency update
This release updates Pushy's Netty dependency. Importantly, we now depend on `netty-tcnative-boringssl-static` by default. This means that Pushy should just work out of the box for nearly all users, and the setup instructions are now much more straightforward.
This release also fixes an embarrassing `NullPointerException` that could happen when constructing push notification payloads.