Bernoulli Factory Algorithms #13
I have made numerous updates to this document recently, with more algorithms and discussion. Letting them know: @lmendo.
Letting these people know: @63EA13D5.
Regarding the simulation of Euler's constant without floating-point arithmetic, I think I found a simple way to do that, as well as for other constants. Please see the attached paper (I just submitted it to arXiv, but it hasn't appeared there yet).
Thank you for your response. In the meantime, I found the rational interval arithmetic described in Daumas, M., Lester, D., Muñoz, C., "Verified Real Number Calculations: A Library for Interval Arithmetic", arXiv:0708.3721 [cs.MS], 2007, which computes upper and lower bounds of common functions, and I implemented this arithmetic in Python. However, as I found, even this rational interval arithmetic doesn't always lend itself to simple Bernoulli factory algorithms (and because Python's Fraction class is very slow, I had to optimize some parts of the code). This is why I have avoided involving it, in the hope that an even simpler algorithm is possible without resorting to this kind of arithmetic.

Also, on your problem of "characteriz[ing] the minimum number of inputs that an arbitrary algorithm must use to transform 1/2-coins into a τ-coin", I am aware of the following paper by D. Kozen that relates to this: "Optimal Coin Flipping". By the results in that paper, for example, any algorithm of the kind relevant here will require at least 2 unbiased coin flips on average to simulate a coin of known bias τ, except when τ is a dyadic rational.
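As a sketch of the kind of rational interval arithmetic mentioned above (this is my own minimal illustration, not the optimized code referred to in the comment, and the class and function names are hypothetical): exact Fraction endpoints, endpoint-wise addition and multiplication, and a truncated-Taylor enclosure for exp on [0, 1], where the partial sum is a lower bound and the tail is bounded by twice the first omitted term.

```python
from fractions import Fraction

class RationalInterval:
    """A closed interval [low, high] with exact rational endpoints."""

    def __init__(self, low, high):
        self.low, self.high = Fraction(low), Fraction(high)
        if self.low > self.high:
            raise ValueError("empty interval")

    def __add__(self, other):
        return RationalInterval(self.low + other.low, self.high + other.high)

    def __mul__(self, other):
        # The product interval is spanned by the four endpoint products.
        p = [self.low * other.low, self.low * other.high,
             self.high * other.low, self.high * other.high]
        return RationalInterval(min(p), max(p))

    def contains(self, x):
        return self.low <= x <= self.high


def exp_interval(x, terms=12):
    """Enclose exp(x) for rational 0 <= x <= 1: the truncated Taylor
    series is a lower bound and, for terms >= 1, the tail is at most
    twice the first omitted term (the terms shrink by a factor of at
    least 1/(terms+1) <= 1/2 each), giving an upper bound."""
    x = Fraction(x)
    if not 0 <= x <= 1:
        raise ValueError("x must be in [0, 1]")
    partial, term = Fraction(0), Fraction(1)  # term = x^k / k!
    for k in range(terms):
        partial += term
        term = term * x / (k + 1)
    return RationalInterval(partial, partial + 2 * term)
```

For example, `exp_interval(Fraction(1, 2))` returns a very narrow rational interval bracketing sqrt(e), and all bounds stay exact because no floating-point arithmetic is involved.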
Thanks. The paper you link seems very interesting; I'll take a look. If the lower bound is 2, then my algorithm is close to optimal. In my algorithm, and probably in others, computations can be reused if you are going to run the simulation several times (see page 11 of my paper). This can increase speed.
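For context on the 2-flip bound discussed above, the classic bit-by-bit comparison method attains an expected flip count of exactly 2 for any τ: lazily compare a uniform random number U in [0, 1) with the binary expansion of τ, stopping at the first differing bit. A sketch (the function name and interface are illustrative, not from either paper):

```python
from fractions import Fraction
import random

def tau_coin(tau, fair_bit=lambda: random.getrandbits(1)):
    """Return 1 with probability tau (0 < tau < 1) using fair coin flips.

    Compares a lazily generated uniform U in [0, 1) with tau one binary
    digit at a time.  Each round ends the comparison with probability
    1/2, so the expected number of fair flips is 2, matching Kozen's
    lower bound for non-dyadic tau.
    """
    t = Fraction(tau)
    if not 0 < t < 1:
        raise ValueError("tau must be in (0, 1)")
    while True:
        t *= 2
        tau_bit = 1 if t >= 1 else 0  # next binary digit of tau
        if tau_bit:
            t -= 1
        u_bit = fair_bit()            # next binary digit of U
        if u_bit != tau_bit:
            # U < tau exactly when U's first differing bit is smaller.
            return 1 if u_bit < tau_bit else 0
```

Passing a custom `fair_bit` source makes the procedure deterministic and easy to check: for τ = 1/3 = 0.0101...₂, feeding the bits 0, 0 yields heads, while feeding the bit 1 yields tails.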
I have noticed one error in your paper: in equation (16), "(2j-1)2j(2j+1)" should read "2(j-1)(2(j-1)+1)(2(j-1)+2)"; otherwise the algorithm would simulate the wrong probability. In addition, for clarity, the "1/2" and "3/2" in the algorithm's pseudocode should be in parentheses. In the meantime, I have another open question:
REFERENCE: K. Bringmann, F. Kuhn, et al., "Internal DLA: Efficient Simulation of a Physical Growth Model", in: Proc. 41st International Colloquium on Automata, Languages, and Programming (ICALP 2014), 2014.
Thanks for the correction. In my code I am using equation (15), so I hadn't noticed the mistake. As for the other problems, I don't really know, sorry.
The requests and open questions for all my articles are now on a dedicated page: Requests and Open Questions. However, since this GitHub issue is about all aspects of Bernoulli factories, not just the ones mentioned in Requests and Open Questions, I will leave this issue open for a while.
I want to draw attention to my Supplemental Notes for Bernoulli Factory algorithms, especially the section "Approximation Schemes". It covers ways to build polynomial approximations** to the vast majority of functions for which the Bernoulli factory problem can be solved (also called factory functions), including concave, convex, and twice differentiable functions. My goal now is to find faster and more practical polynomial approximation schemes for these functions, which is why I have the following open questions:
** See my Math Stack Exchange question for a formal statement of the kind of approximations that I mean.
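To make the idea concrete, here is a small illustration (my own toy example, not one of the schemes from the notes) of why concave functions are convenient: for a concave f on [0, 1], the degree-n Bernstein polynomial B_n f satisfies B_n f ≤ f everywhere, so the B_n f form lower polynomial approximations computable from f's values at rational points alone.

```python
from fractions import Fraction
from math import comb

def bernstein(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at rational x:
    B_n f(x) = sum_{k=0}^{n} f(k/n) * C(n,k) * x^k * (1-x)^(n-k)."""
    x = Fraction(x)
    return sum(Fraction(f(Fraction(k, n))) * comb(n, k)
               * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Example: f(x) = 4x(1-x) is concave, so B_n f <= f pointwise.  Here
# B_n f(x) works out exactly to 4x(1-x)(n-1)/n, showing the well-known
# O(1/n) convergence rate of plain Bernstein approximation -- one
# reason faster schemes are worth looking for.
f = lambda x: 4 * x * (1 - x)
```

For instance, at x = 1/2 the exact values B_2 f = 1/2 and B_4 f = 3/4 approach f(1/2) = 1 from below as the degree grows.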
This was part of my previous comment. However, these polynomial approximation methods don't ensure a finite expected running time in general. I suspect that a running time with finite expectation isn't possible in general unless the residual probabilities formed by the polynomials are of order O(1/n^(2+epsilon)) for some positive epsilon, which Holtz et al. ("New coins from old, smoothly", Constructive Approximation, 2011) proved is possible only if the function to be simulated is C2 continuous. I suspect this is so because sums of residual probabilities of the form O(1/n^2) don't converge, whereas such sums do converge when the order is O(1/n^(2+epsilon)). (By residual probabilities, I mean the probability P(N > n) given in Result 3, condition (v), of that paper.) Thus I have several questions:
Other interesting questions:
I have collected all my questions on the Bernoulli factory problem in one place:
Issue opened to seek comments or suggestions related to my page on Bernoulli Factory algorithms, which are algorithms to turn coins biased one way into coins biased another way.
https://peteroupc.github.io/bernoulli.html
You can send comments on this document in this issue. You are welcome to suggest additional Bernoulli factory algorithms, especially specific continued fraction expansions and series expansions for the general martingale and Mendo algorithms.
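For readers new to the topic, the simplest example in this family is von Neumann's trick, which turns a coin of any unknown bias p (0 < p < 1) into a fair coin: flip twice; HT and TH are equally likely, so output on those outcomes and repeat otherwise. A sketch (function names are illustrative):

```python
import random

def fair_from_biased(biased_flip):
    """Von Neumann's trick: produce an unbiased bit from a coin of
    unknown bias p in (0, 1).  HT and TH each occur with probability
    p*(1-p), so conditioning on 'the two flips differ' yields exactly
    1/2, regardless of p."""
    while True:
        a, b = biased_flip(), biased_flip()
        if a != b:
            return a  # 1 on heads-tails, 0 on tails-heads

# An example biased coin with p = 0.9 (illustrative only):
coin = lambda: 1 if random.random() < 0.9 else 0
```

The expected number of biased flips is 2 / (2p(1-p)) = 1/(p(1-p)), so the procedure is cheap for p near 1/2 and expensive for very biased coins.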