diff --git a/docs/images/emissions_schedule.jpg b/docs/images/emissions_schedule.jpg new file mode 100644 index 00000000..444e1f76 Binary files /dev/null and b/docs/images/emissions_schedule.jpg differ diff --git a/docs/tech/1_basics.md b/docs/tech/1_basics.md index a1c6d898..9e4b06c4 100644 --- a/docs/tech/1_basics.md +++ b/docs/tech/1_basics.md @@ -2,14 +2,14 @@ ## 1.1. Overview -The Innovation Game (TIG) is the first coordination protocol designed specifically for algorithmic innovation. At the core of TIG lies a novel variant of proof-of-work called optimisable proof-of-work (OPoW). +The Innovation Game (TIG) is the first and only protocol designed specifically to accelerate algorithmic innovation. At the core of TIG lies a novel variant of proof-of-work called optimisable proof-of-work (OPoW). -OPoW uniquely integrates multiple proof-of-works to be featured, "binding" them in a manner prevents centralization due to optimizations of the proof-of-work algorithms. This resolves a longstanding issue that had hindered proof-of-work from being based on real-world computational scientific challenges. +OPoW uniquely features multiple proof-of-works, “binding” them together in a manner that prevents centralisation due to optimisations of the proof-of-work algorithms (see Section 2.1 of the TIG white paper for details). This resolves a longstanding issue that had hindered proof-of-work from being based on real-world computational scientific challenges. TIG combines a crypto-economic framework with OPoW to: -1. Incentivise miners, referred to as Benchmarkers, to adopt the most efficient proof-of-work algorithms contributed openly to TIG. This incentive is derived from sharing block rewards proportional to the number of solutions found. -2. Incentivise contributors, known as Innovators, to optimise existing proof-of-work algorithms and invent new ones. Their incentive is tied to sharing block rewards based on adoption of their algorithms by Benchmarkers. +1. Incentivise miners, referred to as Benchmarkers, to adopt the most efficient algorithms (for performing proof-of-work) that are contributed openly to TIG. This incentive is derived from sharing block rewards proportional to the number of solutions found. +2. Incentivise contributors, known as Innovators, to optimise existing proof-of-work algorithms and invent new ones. The incentive is provided by the prospect of earning a share of the block rewards based on adoption of their algorithms by Benchmarkers. TIG will progressively phase in proof-of-works over time, directing innovative efforts towards the most significant challenges in science. @@ -32,7 +32,7 @@ A round spans 10,080 blocks, approximately equivalent to 604,800 seconds or 7 da TIG’s token emission schedule comprises 5 tranches, each with the same total emission of 26,208,000 TIG, but successively doubling in duration (measured in rounds): -| **Tranche** | **#Rounds** | **Emissions per block** | **Emissions per round** | **Start Date** | **End Date** | +| **Tranche** | **#Rounds** | **Token emissions per block** | **Token emissions per round** | **Start Date** | **End Date** | | --- | --- | --- | --- | --- | --- | | 1 | 26 | 100 | 1,008,000 | 24 Nov 2023 | 30 May 2024\* | | 2 | 52 | 50 | 504,000 | 31 May 2024\* | 30 May 2025\* | @@ -42,10 +42,12 @@ TIG’s token emission schedule comprises 5 tranches, each with the same total e \*Approximates -Post tranche 5, rewards are solely based on tokens generated from license fees paid by commercial entities purchasing licenses. 
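+
+The schedule's arithmetic can be checked directly from the figures above: a round spans 10,080 blocks, every tranche emits the same 26,208,000 TIG in total, and each successive tranche doubles the number of rounds while halving the per-block emission. The following Rust snippet is an illustrative sketch (not part of any TIG codebase) that reproduces the per-round and per-tranche figures in the table:
+
+```rust
+fn main() {
+    const BLOCKS_PER_ROUND: f64 = 10_080.0;
+    let mut cumulative = 0.0;
+    for i in 0..5u32 {
+        // Each tranche doubles the number of rounds and halves the per-block emission.
+        let rounds = 26.0 * 2f64.powi(i as i32);
+        let per_block = 100.0 / 2f64.powi(i as i32);
+        let per_round = per_block * BLOCKS_PER_ROUND; // tranche 1: 100 * 10,080 = 1,008,000
+        let per_tranche = per_round * rounds;         // 26,208,000 for every tranche
+        cumulative += per_tranche;
+        println!(
+            "tranche {}: {per_round} TIG/round, {per_tranche} TIG in tranche, {cumulative} TIG cumulative",
+            i + 1
+        );
+    }
+}
+```
+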
+![Cumulative token emissions schedule](../images/emissions_schedule.jpg)
+
+Post tranche 5, rewards are solely based on tokens generated from TIG Commercial license fees.

## 1.5. TIG Token

-The [TIG token contract](../tig-token/TIGToken.sol) is derived from OpenZepplin's ERC20 standard and is deployed to Basechain at the following address: [0x0C03Ce270B4826Ec62e7DD007f0B716068639F7B](https://basescan.org/token/0x0C03Ce270B4826Ec62e7DD007f0B716068639F7B).
+The TIG token adheres to the ERC20 standard and is deployed to Basechain at the following address: [0x0C03Ce270B4826Ec62e7DD007f0B716068639F7B](https://basescan.org/token/0x0C03Ce270B4826Ec62e7DD007f0B716068639F7B).

-TIG tokens earned during a round are minted within 7 days following the conclusion of that round.
\ No newline at end of file
+TIG tokens earned during a round are minted within 7 days following the conclusion of that round.
diff --git a/docs/tech/2_challenges.md b/docs/tech/2_challenges.md
index 82994564..c4ee079f 100644
--- a/docs/tech/2_challenges.md
+++ b/docs/tech/2_challenges.md
@@ -1,9 +1,22 @@
-
 # 2. Challenges

-A challenge within the context of TIG is a computational problem adapted as one of the proof-of-works in OPoW. Presently, TIG features three challenges: boolean satisfiability, vehicle routing, and the knapsack problem. Over the coming year, an additional seven challenges from domains such as AI, cryptography, biomedical research, and climate science will be phased in.
+A challenge within the context of TIG is a computational problem adapted as one of the proof-of-works in OPoW. Presently, TIG features three challenges: boolean satisfiability, vehicle routing, and the knapsack problem. Over the coming year, an additional seven challenges from domains including artificial intelligence, biology, medicine, and climate science will be phased in.
+
+TIG focuses on a category of computational problems that are “asymmetric”. These are problems that require significant computational effort to solve, but once a solution is proposed, verifying its correctness requires relatively trivial computational effort.
+
+Some areas in which asymmetric Challenges potentially suitable for The Innovation Game can be found include:
+
+- **Mathematical Problems**. There are a great many examples of asymmetric problems in mathematics, from generating a mathematical proof to computing solutions to an equation. Further examples include zero knowledge proof (ZKP) generation, prime factorisation, and the discrete logarithm problem, all of which have significant implications in cryptography and number theory. NP-complete problems have solutions that are simple to check, yet the problems themselves are generally considered unsolvable within polynomial time. Such problems are fundamental in science and engineering. Examples include the Hamiltonian Cycle Problem and the Boolean Satisfiability Problem (SAT).
+
+- **Optimisation Problems**. Optimisation problems are at the core of numerous scientific, engineering, and economic applications. They involve finding the best solution from all feasible solutions, often under a set of constraints. Notable examples include the Travelling Salesman Problem and the Graph Colouring Problem. Optimisation problems are also central to the training of machine learning models, and to the design of machine learning architectures such as the Transformer neural network architecture. Relevant techniques include gradient descent, backpropagation, and convex optimisation.
+
+- **Simulations**. 
Simulations are powerful tools for modelling and understanding complex systems, from weather patterns to financial markets. While simulations themselves may not always be asymmetric problems, simulations may involve solving problems that are asymmetric, and these problems may be suitable for The Innovation Game. For example, simulations often require the repeated numerical solving of equations, where this numerical solving is an asymmetric problem. -Beyond this initial set of ten challenges, TIG's roadmap includes the establishment of a scientific committee tasked with sourcing diverse computational problems. +- **Inverse Problems**. Inverse problems involve deducing system parameters from observed data and are generically asymmetric. These problems are ubiquitous in fields like geophysics, medical imaging, and astronomy. For example, in medical imaging, reconstructing an image from a series of projections is an inverse problem, as seen in computed tomography (CT) scans. + +- **General Computations**. Any calculation can be made efficiently verifiable using a technique called “verifiable computation”. In verifiable computation, the agent performing the computation also generates a proof (such as a zero knowledge proof) that the computation was performed correctly. A verifier can then check the proof to ensure the correctness of the computation without needing to repeat the computation itself. + +Beyond the initial set of challenges, TIG's roadmap includes the establishment of a scientific committee tasked with sourcing diverse computational problems. This chapter covers the following topics: @@ -19,10 +32,8 @@ Each challenge also stipulates the method for verifying whether a "solution" ind Notes: -- The minimum difficulty of each challenge ensures a minimum of 10^15 unique instances, with even more as difficulty increases. - +- The minimum difficulty of each challenge ensures a minimum of 10^15 unique instances. This number increases further as difficulty increases. - Some instances may lack a solution, while others may possess multiple solutions. - - Algorithms are not guaranteed to find a solution. ## 2.2. Adapting Real World Computational Problems @@ -39,88 +50,73 @@ TIG's inclusion of multiple difficulty parameters in proof-of-work sets it apart Notes: - Difficulty parameters are always integers for reproducibility, with fixed-point numbers used if decimals are necessary. - - The expected computational cost to compute a solution rises monotonically with difficulty. ### 2.2.1. Pareto Frontiers & Qualifiers The issue of valuing solutions of different difficulties can be deconstructed into three sub-issues: -1. There is no explicit value function that can "fairly" flatten difficulties onto a single dimension without introducing bias - +1. There is no explicit value function that can ”fairly” flatten difficulties onto a single dimension without introducing bias 2. Setting a single difficulty will avoid this issue, but will excessively limit the scope of innovation for algorithms and hardware +3. Assigning the same value to solutions no matter their difficulty would lead to Benchmarkers “spamming” solutions at the easiest difficulty -3. 
Assigning the same value to solutions no matter their difficulty would lead to Benchmarkers "spamming" solutions at the easiest difficulty

+The key insight behind TIG’s Pareto frontiers mechanism (described below) is that the value function does not have to be explicit, but rather can be fluidly discoverable by Benchmarkers in a decentralised setting by allowing them to strike a balance between the difficulty they select and the number of solutions they can compute.

-The key insight behind TIG's Pareto frontiers mechanism (described below) is that the value function does not have to be explicit, but rather can be fluidly discoverable by Benchmarkers in a decentralised setting by allowing them to strike a balance between the difficulty they select and the number of solutions they can compute.

-![Pareto frontier](https://upload.wikimedia.org/wikipedia/commons/2/27/Pareto_Efficient_Frontier_1024x1024.png)

-*Figure: The red line is an example of a Pareto frontier. [Sourced from wikipedia](https://en.wikipedia.org/wiki/Pareto_front)*

-This emergent value function is naturally discovered as Benchmarkers, each guided by their unique value function, consistently select difficulties they perceive as offering the highest value. This process allows them to exploit inefficiencies until they converge upon a set of difficulties where no further inefficiencies remain to be exploited; in other words, staying at the same difficulties becomes more efficient, while increasing or decreasing would be inefficient.

+This emergent value function is naturally discovered as Benchmarkers, each guided by their unique “value function”, consistently select difficulties they perceive as offering the highest value. This process allows them to exploit inefficiencies until they converge upon a set of difficulties where no further inefficiencies remain to be exploited; in other words, staying at the same difficulties becomes more efficient, while increasing or decreasing would be inefficient.

Changes such as Benchmarkers going online/offline, availability of more performant hardware/algorithms, etc will disrupt this equilibrium, leading to a new emergent value function being discovered.

The Pareto frontiers mechanism works as follows:

1. Plot the difficulties for all active solutions or benchmarks.
-
-2. Identify the hardest difficulties based on the Pareto frontier and designate their solutions as qualifiers.
-
+2. Identify the hardest difficulties based on the [Pareto frontier](https://en.wikipedia.org/wiki/Pareto_front) and designate their solutions as qualifiers.
3. Update the total number of qualifying solutions.
+4. If the total number of qualifiers is below a threshold\*, repeat the process.

-4. If the total number of qualifiers is below a threshold (currently set to `1000`), repeat the process.
+\*The threshold number of qualifiers is currently set to 1,000.

Notes:

- Qualifiers for each challenge are determined every block.
-
- Only qualifiers are utilised to determine a Benchmarker's influence and an Algorithm's adoption, earning the respective Benchmarker and Innovator a share of the block rewards.
-
-- The total number of qualifiers may be over the threshold. For example, if the first frontier has `400` solutions, the second frontier has `900` solutions, then there are `1300` qualifiers.
+- The total number of qualifiers may be over the threshold. For example, if the first frontier has 400 solutions and the second frontier has 900 solutions, then 1,300 qualifiers are rewarded.
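+
+To make the frontier-peeling concrete, the following Rust snippet is an illustrative sketch (it is not TIG protocol code, and the difficulty parameters and solution counts are invented for the example). It repeatedly takes the Pareto frontier of the hardest remaining difficulties until the qualifier threshold is reached:
+
+```rust
+// A difficulty is modelled here as a vector of integer parameters; each entry
+// below pairs a difficulty with the number of active solutions at that difficulty.
+type Difficulty = Vec<i32>;
+
+/// `a` dominates `b` if it is at least as hard in every parameter and harder in at least one.
+fn dominates(a: &Difficulty, b: &Difficulty) -> bool {
+    a.iter().zip(b).all(|(x, y)| x >= y) && a.iter().zip(b).any(|(x, y)| x > y)
+}
+
+/// Peel off Pareto frontiers of the hardest difficulties until the accumulated
+/// number of solutions reaches the qualifier threshold.
+fn qualifiers(mut points: Vec<(Difficulty, u32)>, threshold: u32) -> Vec<(Difficulty, u32)> {
+    let mut selected = Vec::new();
+    let mut total = 0;
+    while total < threshold && !points.is_empty() {
+        // Current frontier: points not dominated by any other remaining point.
+        let frontier: Vec<(Difficulty, u32)> = points
+            .iter()
+            .filter(|(d, _)| !points.iter().any(|(other, _)| dominates(other, d)))
+            .cloned()
+            .collect();
+        points.retain(|(d, _)| !frontier.iter().any(|(f, _)| f == d));
+        total += frontier.iter().map(|(_, n)| *n).sum::<u32>();
+        selected.extend(frontier);
+    }
+    selected
+}
+
+fn main() {
+    // Hypothetical difficulties with two parameters, paired with solution counts.
+    let points = vec![
+        (vec![50, 580], 400),
+        (vec![60, 560], 300),
+        (vec![40, 600], 250),
+        (vec![40, 560], 900),
+    ];
+    let q = qualifiers(points, 1000);
+    let total: u32 = q.iter().map(|(_, n)| *n).sum();
+    println!("qualifying solutions: {total}"); // may exceed the threshold, as noted above
+}
+```
+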
### 2.2.2. Difficulty Adjustment

Every block, the qualifiers for a challenge dictate its difficulty range. Benchmarkers, when initiating a new benchmark, must reference a specific challenge and block in their benchmark settings before selecting a difficulty within the challenge's difficulty range.

-At a high level, a challenge's difficulty range is determined as follows:
-
-1. From the qualifiers, filter out the easiest difficulties based on the Pareto frontier to establish the base frontier.
+A challenge’s difficulty range is determined as follows (a worked sketch is given at the end of this section):

-2. Calculate a difficulty multiplier (capped to `2.0`)
-   * $difficulty\ multiplier = \frac{number\ of\ qualifiers}{threshold\ number\ of\ qualifiers}$
-   * e.g. if there are `1500` qualifiers and the threshold is `1000`, the multiplier is `1500/1000 = 1.5`
+1. From the qualifiers, filter out the lowest difficulties based on the Pareto frontier to establish the base frontier.
+2. Calculate a difficulty multiplier (capped to 2.0)
+    1. difficulty multiplier = number of qualifiers / threshold number of qualifiers
+    2. e.g. if there are 1500 qualifiers and the threshold is 1000, the multiplier is 1500/1000 = 1.5
3. Multiply the base frontier by the difficulty multiplier to determine the upper or lower bound.
-   * If multiplier > 1, base frontier is the lower bound
-   * If multiplier < 1, base frontier is the upper bound
+    1. If multiplier > 1, base frontier is the lower bound
+    2. If multiplier < 1, base frontier is the upper bound

The following Benchmarker behaviour is expected:

-- **When number of qualifiers is higher than threshold:** Benchmarkers will naturally select harder and harder difficulties so that their solutions stay on the frontiers for as long as possible, as only qualifiers count towards influence and share in block rewards.
-
+- **When number of qualifiers is higher than threshold:** Benchmarkers will naturally select increasingly large difficulties so that their solutions stay on the frontiers for as long as possible, as only qualifiers count towards influence and result in a share of the block rewards.
- **When number of qualifiers is equal to threshold:** Benchmarkers will stay at the same difficulty
-
-- **When number of qualifiers is lower than threshold:** Benchmarkers will naturally select easier and easier difficulties to compute more solutions which will be qualifiers.
+- **When number of qualifiers is lower than threshold:** Benchmarkers will naturally select smaller difficulties to compute more solutions which will be qualifiers.
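+
+As a concrete illustration of the three steps above, the following Rust sketch shows one plausible way of turning a base frontier point and a qualifier count into a difficulty range. It is not protocol code, and the parameter values are invented:
+
+```rust
+/// Difficulty multiplier = number of qualifiers / threshold, capped to 2.0.
+fn difficulty_multiplier(num_qualifiers: u32, threshold: u32) -> f64 {
+    (num_qualifiers as f64 / threshold as f64).min(2.0)
+}
+
+fn main() {
+    let base_frontier_point = [50u32, 300]; // hypothetical difficulty parameters
+    let multiplier = difficulty_multiplier(1500, 1000); // 1.5, as in the example above
+
+    // Scale each parameter of the base frontier point by the multiplier.
+    let scaled: Vec<u32> = base_frontier_point
+        .iter()
+        .map(|&p| (p as f64 * multiplier).round() as u32)
+        .collect();
+
+    if multiplier > 1.0 {
+        // Base frontier is the lower bound of the difficulty range.
+        println!("range per parameter: {:?} to {:?}", base_frontier_point, scaled);
+    } else {
+        // Base frontier is the upper bound of the difficulty range.
+        println!("range per parameter: {:?} to {:?}", scaled, base_frontier_point);
+    }
+}
+```
+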
## 2.3. Regulating Verification Load

-Verification of solutions constitutes almost the entirety of the computation load for TIG's network. In addition to probabilistic verification which drastically reduces the number of solutions that require verification, TIG employs a solution signature threshold mechanism to regulate the rate of solutions and the verification load of each solution.
+Verification of solutions constitutes almost the entirety of the computation load for TIG’s network in the early phase of deployment. In addition to probabilistic verification which drastically reduces the number of solutions that require verification, TIG employs a solution signature threshold mechanism to regulate the rate of solutions and the verification load of each solution.

### 2.3.1. Solution Signature

-A solution signature is a unique identifier for each solution derived from hashing the solution and its runtime signature. To be considered valid, this signature must fall below a dynamically adjusted threshold.
-
-Each challenge possesses its own dynamically adjusted solution signature threshold which starts at `100%` and can be adjusted by a maximum of `0.25%` per block. This percentage reflects the probability of a solution being valid.
+A solution signature is a unique identifier for each solution derived from hashing the solution and its runtime signature. TIG requires the signature of any submitted solution to fall below a dynamically adjusted threshold.

-Lowering the threshold has the effect of reducing the probability that any given solution will be valid, thereby decreasing the overall solution rate. As a result, the number of qualifiers, and subsequently the difficulty range of the challenge, will also decrease. Increasing the threshold has the opposite effect.
+Each challenge possesses its own dynamically adjusted solution signature threshold which begins at 100% and can be adjusted by a maximum of 0.5% per block. The solution signature threshold adjusts the probability of a solution being submittable to TIG. Lowering the threshold has the effect of reducing this probability, thereby decreasing the overall rate of solutions being submitted. As a result, the difficulty range of the challenge will also decrease. Increasing the threshold has the opposite effect.

There are 2 feedback loops which adjusts the threshold:

-1. **Target fuel consumption** (currently disabled). The execution of an algorithm is performed through a WASM Virtual Machine which tracks "fuel consumption", a proxy for the real runtime of the algorithm. Fuel consumption is deterministic and is submitted by Benchmarkers when submitting solutions.
+1. **Target fuel consumption** (currently disabled). The execution of an algorithm is performed through a WASM Virtual Machine which tracks “fuel consumption”, a proxy for the real runtime of the algorithm. Fuel consumption is deterministic and is submitted by Benchmarkers when submitting solutions.

-    Another motivation for targeting a specific fuel consumption is to maintain a fair and decentralised system. If the runtime approaches the lifespan of a solution, raw speed (as opposed to efficiency) would become the dominant factor, potentially giving a significant advantage to hardware (such as supercomputers) that prioritises speed over efficiency.
+    Another motivation for targeting a specific fuel consumption is to maintain a fair and decentralised system. If the runtime approaches the lifespan of a solution, raw speed (as opposed to efficiency) would become the dominant factor, potentially giving a significant advantage to certain types of specialised hardware architectures (such as those found in “supercomputers”) that prioritise speed over efficiency (which is undesirable).

-1. **Target solutions rate.** Solutions rate is determined every block based on mempool proofs that are being confirmed. (Each proof is associated with a benchmark, containing a number of solutions).
+1. **Target solutions rate.** Solutions rate is determined every block based on “mempool” proofs that are being confirmed. (Each proof is associated with a benchmark, containing a number of solutions).

-    Spikes in solutions rate can occur when there is a sudden surge of new Benchmarkers/compute power coming online. If left unregulated, the difficulty should eventually rise such that the solution rate settles to an equilibrium rate, but this may take a prolonged period causing a strain on the network from the large verification load. To smooth out the verification load, TIG targets a specific solutions rate. 
\ No newline at end of file + Spikes in solutions rate can occur when there is a sudden surge of new Benchmarkers/compute power coming online. If left unregulated, the difficulty should eventually rise such that the solution rate settles to an equilibrium rate, but this may take a prolonged period causing a strain on the network from the large verification load. To smooth out the verification load, TIG targets a specific solutions rate. diff --git a/docs/tech/3_innovators.md b/docs/tech/3_innovators.md index a2a516d6..53209cbf 100644 --- a/docs/tech/3_innovators.md +++ b/docs/tech/3_innovators.md @@ -1,13 +1,13 @@ # 3. Innovators -Innovators are players in TIG who optimise existing proof-of-work algorithms and/or invent new ones, contributing them to TIG in order to share in block rewards. +Innovators are players in TIG who optimise existing proof-of-work algorithms and/or invent new ones, contributing them to TIG in the hope of earning token rewards. This chapter covers the following topics: 1. The two types of algorithm submissions 2. Mechanisms for maintaining a decentralised repository 3. How algorithms are executed by Benchmarkers -4. How algorithms earn block rewards +4. How algorithms earn token rewards ## 3.1. Types of Algorithm Submissions @@ -20,50 +20,48 @@ There are two types of algorithm submissions in TIG: Presently, code submissions are restricted to Rust, automatically compiled into WebAssembly (WASM) for execution by Benchmarkers. Rust was chosen for its performance advantages over other languages, enhancing commercial viability of algorithms contributed to TIG, particularly in high-performance computing. Future iterations of TIG will support additional languages compilable to WASM. -**Breakthrough submissions** involve the introduction of novel algorithms tailored to solve TIG's proof-of-work challenges. A breakthrough submission is expected to yield such a significant performance enhancement that even unoptimised code of the new algorithm outpaces the most optimised code of an existing one. +**Breakthrough submissions** involve the introduction of novel algorithms tailored to solve TIG's proof-of-work challenges. A breakthrough submission will often yield such a significant performance enhancement that even unoptimised code of the new algorithm outpaces the most optimised code of an existing one. -Support for breakthrough submissions is on TIG's roadmap. +Note: Support for breakthrough submissions is not currently in place but will be available in the coming months (pending a sufficiently wide token distribution). ## 3.2. Decentralised Repository -Algorithms are contributed to a repository devoid of a centralised gatekeeper. TIG addresses crucial issues such as spam and piracy to ensure fair rewards for Innovators based on performance, maintaining a strong incentive for innovation. +Algorithms are contributed to a repository without a centralised gatekeeper. TIG addresses crucial issues such as spam and piracy to ensure fair rewards for Innovators based on performance, maintaining a strong incentive for innovation. To combat spam, Innovators must pay a submission fee of 0.001 ETH, burnt by sending it to the null address (0x0000000000000000000000000000000000000000). In the future, this fee will be denominated in TIG tokens. -To counter piracy concerns, TIG implements a push delay and merge points mechanism: +To address the possibility of piracy and to provide an opportunity for IP protection, TIG implements a “push delay” and “merge points” mechanism: ### 3.2.1. 
Push Delay Mechanism -Upon submission, algorithms are committed to their own branch and pushed to a private repository. Following successful compilation into WebAssembly (WASM), a delay of `3` rounds ensues before the algorithm is made public where the branch is pushed to TIG's public repository. This delay safeguards Innovators' contributions, allowing them time to benefit before others can optimise upon or pirate their work. +Upon submission, algorithms are committed to their own branch and pushed to a private repository. Following successful compilation into WebAssembly (WASM), a delay of 3 rounds ensues before the algorithm is made public where the branch is pushed to TIG’s public repository. This delay safeguards Innovators' contributions, allowing them time to benefit before others can optimise upon or pirate their work. Notes: - Confirmation of an algorithm's submission occurs in the next block, determining the submission round. - -- An algorithm submitted in round `X` is made public at the onset of round `X + 3`. +- An algorithm submitted in round X is made public at the onset of round X + 3. ### 3.2.2. Merge Points Mechanism -This mechanism aims to deter algorithm piracy. For every block in which an algorithm achieves at least `25%` adoption, it earns a merge point alongside a share of the block reward based on its adoption. +This mechanism aims to deter algorithm piracy. For every block in which an algorithm achieves at least 25% adoption, it earns a merge point alongside a share of the block reward based on its adoption. -At the end of every round, the algorithm from each challenge with the most merge points (exceeding a minimum threshold of `5040`) is merged into the repository's main branch. Merge points reset each round. +At the end of each round, the algorithm from each challenge with the most merge points (exceeding a minimum threshold of 5,040) is merged into the repository's main branch. Merge points reset each round. -Merged algorithms, as long as their adoption is above `0%`, share in block rewards every block. +Merged algorithms, as long as their adoption is above 0%, share in block rewards every block. -The barrier to getting merged is intentionally set high as to minimise the likely payoff for pirating algorithms. +The barrier for an Innovator contribution to be merged is intentionally chosen to be relatively high to minimise the likely payoff for pirating algorithms. -For breakthrough submissions, the vote for recognising the algorithm as a breakthrough starts only when its code gets merged (details to come). This barrier is based on TIG's expectation that breakthroughs will demonstrate distinct performance improvements, ensuring high adoption even in unoptimised code. +For algorithmic breakthrough submissions, the vote for recognising the algorithm as a breakthrough starts only when its code gets merged (details to come). This barrier is based on TIG’s expectation that breakthroughs will demonstrate distinct performance improvements, ensuring high adoption even in unoptimised code. ## 3.3. Deterministic Execution -Algorithms in TIG are compiled into WebAssembly (WASM), facilitating execution by a corresponding WASM Virtual Machine. This environment, based on wasmi developed by parity-labs for blockchain applications, enables tracking of fuel consumption, imposition of memory limits, and has tools for deterministic compilation. +Algorithms in TIG are compiled into WebAssembly (WASM), facilitating execution by a corresponding WASM Virtual Machine. 
This environment, based on wasmi developed by Parity Technologies for blockchain applications, enables tracking of fuel consumption, imposition of memory limits, and has tools for deterministic compilation. Benchmarkers must download the WASM blob for their selected algorithm from TIG's repository before executing it using TIG's WASM Virtual Machine. Notes: - The WASM Virtual Machine functions as a sandbox environment, safeguarding against excessive runtime, memory usage, and malicious actions. - - Advanced Benchmarkers may opt to compile algorithms into binary executables for more efficient nonce searches, following thorough vetting of the code. ### 3.3.1. Runtime Signature @@ -74,6 +72,6 @@ As an algorithm is executed by TIG's WASM Virtual Machine, a "runtime signature" TIG incentivises algorithm contributions through block rewards: -- `15%` of block rewards are allocated evenly across challenges with at least one "pushed" algorithm before distributing pro-rata based on adoption rates. - -- In the future, a fixed percentage will be assigned to the latest breakthrough for each challenge. In the absence of a breakthrough, this percentage reverts back to the Benchmarkers' pool. Given the rarity of breakthroughs, this represents a significant reward, reflecting TIG's emphasis on breakthrough innovations. \ No newline at end of file +- 15% of block rewards are allocated evenly across challenges with at least one "pushed" algorithm before distributing pro-rata based on adoption rates. +- In the future, a fixed percentage (we intend 15% of block rewards, see below) will be assigned to the latest algorithmic breakthrough for each challenge. In the absence of a breakthrough, this percentage reverts back to the Benchmarkers' pool. Given the expected relative rarity of algorithmic breakthroughs (compared to code optimisations), this represents a significant reward, reflecting TIG's emphasis on breakthrough innovations. +- When the rewards stream for algorithmic breakthroughs is introduced, there will be a total of 30% of block rewards for Innovators and 70% for Benchmarkers. Over time, we intend for the percentage of block rewards for Innovators to approach 50%. diff --git a/docs/tech/4_benchmarkers.md b/docs/tech/4_benchmarkers.md index 8c46a4d2..360ad139 100644 --- a/docs/tech/4_benchmarkers.md +++ b/docs/tech/4_benchmarkers.md @@ -1,4 +1,4 @@ -# 4\. Benchmarkers +# 4. Benchmarkers Benchmarkers are players in TIG who continuously select algorithms to compute solutions for challenges and submit them to TIG through benchmarks and proofs to earn block rewards. @@ -13,9 +13,9 @@ This chapter covers the following topics: The process of benchmarking comprises 3 steps: -1. Picking benchmark settings +1. Selecting benchmark settings 2. Generate challenge instances -3. Execute algorithm on instances & record solutions +3. Execute algorithm on instances and record solutions Apart from algorithm selection, this process is entirely automated by the browser benchmarker. @@ -29,35 +29,32 @@ A Benchmarker must select their settings, comprising 5 fields, before benchmarki 4. Block Id 5. Difficulty -**Player Id** is the address of the Benchmarker. It prevents fraudulent re-use of solutions computed by another Benchmarker. +**Player Id** is the address of the Benchmarker. This prevents fraudulent re-use of solutions computed by another Benchmarker. -**Challenge Id** is the proof-of-work that the Benchmarker wants to compute solutions for. The challenge must be flagged as active in the referenced block. 
Benchmarkers are incentivised to make their selection based on minimising their imbalance. Imbalance minimisation is the default strategy for the browser benchmarker. +**Challenge Id** identifies the proof-of-work challenge for which the Benchmarker is attempting to compute solutions. The challenge must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on minimising their imbalance. Note: Imbalance minimisation is the default strategy for the browser benchmarker. -**Algorithm Id** is the proof-of-work algorithm that the Benchmarker wants to use to compute solutions. The algorithm must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on the algorithm's performance in computing solutions. +**Algorithm Id** is the proof-of-work algorithm that the Benchmarker wants to use to compute solutions. The algorithm must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on the algorithm’s performance in computing solutions. -**Block Id** is a reference block from which the lifespan of the solutions starts counting down. Benchmarkers are incentivised to reference the latest block as to maximise the remaining lifespan of any computed solutions. +**Block Id** is a reference block from which the lifespan of the solutions begins counting down. Benchmarkers are incentivised to reference the latest block as to maximise the remaining lifespan of any computed solutions. -**Difficulty** is the difficulty of the challenge instances that the Benchmarker is computing solutions for. The difficulty must lie within the valid range of the challenge for the referenced block. Benchmarkers are incentivised to make their selection to strike a balance between the number of blocks their solution will be a qualifier, and the number of solutions they can compute. (e.g. lower difficulty may mean more solutions, but may lower the number of blocks that the solutions are qualifiers) +**Difficulty** is the difficulty of the challenge instances for which the Benchmarker is attempting to compute solutions. The difficulty must lie within the valid range of the challenge for the referenced block. Benchmarkers are incentivised to make their selection to strike a balance between the number of blocks for which their solution will remain a qualifier, and the number of solutions they can compute. (e.g. lower difficulty may mean more solutions, but may lower the number of blocks that the solutions remain qualifiers) ### 4.1.2. Unpredictable Challenge Instances TIG makes it intractable for Benchmarkers to attempt to re-use solutions by: 1. Challenge instances are deterministically pseudo-randomly generated, with at least 10^15 unique instances even at minimum difficulty. - 2. Instance seeds are computed by hashing benchmark settings and XOR-ing with a nonce, ensuring randomness. During benchmarking, Benchmarkers iterate over nonces for seed and instance generation. ### 4.1.3. Algorithm Execution -The compiled WebAssembly (WASM) blobs for active algorithms can be downloaded from TIG's repository: +Active algorithms reside as compiled WebAssembly (WASM) blobs in TIG's open repository. 
-```
-/tig-algorithms/wasm/.wasm
+`https://raw.githubusercontent.com/tig-foundation/tig-monorepo/<branch>/tig-algorithms/wasm/<branch>.wasm`

-where is /
-```
+where `<branch>` is `<challenge_name>/<algorithm_name>`

Benchmarkers download the relevant WASM blob for their chosen algorithm, execute it using TIG's WASM Virtual Machine with specified seed and difficulty inputs.
@@ -97,13 +94,16 @@ A benchmark, a lightweight batch of valid solutions found using identical settin

Upon benchmark submission, it enters the mempool for inclusion in the next block. When the benchmark is confirmed into a block, up to three unique nonces are sampled from the metadata, and corresponding solution data must be submitted by Benchmarkers.

-TIG refers to this sampling as probabilistic verification, and ensures its unpredictability by using both the new block id and benchmark id in seeding the pseudo-random number generator. Probabilistic verification not only drastically reduces the amount of solution data that gets submitted to TIG, but makes it irrational to fraudulently pad a benchmark with fake solutions:
+TIG refers to this sampling as probabilistic verification, and ensures its unpredictability by using both the new block id and benchmark id in seeding the pseudo-random number generator. Probabilistic verification not only drastically reduces the amount of solution data that gets submitted to TIG, but also renders it irrational to fraudulently “pad” a benchmark with fake solutions:
+
+If a Benchmarker computes N solutions, and pads M fake solutions to the benchmark for a total of N + M solutions, then the chance of getting away with this is $\left(\frac{N}{N+M}\right)^3$. The expected payoff for honesty (N solutions always accepted) is always greater than the payoff for fraudulence (N+M solutions sometimes accepted):

-If a Benchmarker computes `N` solutions, and pads `M` fake solutions to the benchmark for a total of `N + M` solutions, then the chance of getting away with this is $\left(\frac{N}{N+M}\right)^3$. The expected payoff for honesty (`N` solutions always accepted) is always greater than the payoff for fraudulence (`N + M` solutions sometimes accepted):
+$$N > (N+M) \cdot \left(\frac{N}{N+M}\right)^3$$

-$$N > (N + M) \cdot \left(\frac{N}{N+M}\right)^3$$
$$1 > \left(\frac{N}{N+M}\right)^2$$

+Note that N is always smaller than N + M, so this inequality always holds.
+
### 4.2.3. Submitting Proof

A proof includes the following fields:

@@ -113,13 +113,13 @@ A proof includes the following fields:

**Benchmark id** refers to the benchmark for which a proof is being submitted. Only one proof can be submitted per benchmark.

-**Array of solution data** must correspond to the nonces sampled from the benchmark's solutions metadata.
+**Array of solution data** must correspond to the nonces sampled from the benchmark’s solutions metadata.

### 4.2.4. Submission Delay & Lifespan mechanism

Upon confirmation of a proof submission, a submission delay is determined based on the block gap between when the benchmark started and when its proof was confirmed.

-A submission delay penalty is calculated by multiplying the submission delay by a multiplier (currently set to `3`). If the penalty is `X` and the proof was confirmed at block `Y`, then the benchmark's solutions only become "active" (eligible to potentially be qualifiers and share in block rewards) from block `X + Y` onwards.
+A submission delay penalty is calculated by multiplying the submission delay by a multiplier (currently set to 3). 
If the penalty is X and the proof was confirmed at block Y, then the benchmark’s solutions only become “active” (eligible to potentially be qualifiers and share in block rewards) from block X + Y onwards. As TIG imposes a lifespan, the maximum number of blocks that a solution can be active (currently set to 120 blocks), there is a strong incentive for Benchmarkers to submit solutions as soon as possible. @@ -128,17 +128,16 @@ As TIG imposes a lifespan, the maximum number of blocks that a solution can be a Two types of verification are performed on solutions submitted to TIG to safeguard algorithm adoption against manipulation: 1. Verification of serialised solutions against challenge instances, triggered during benchmark and proof submission. - 2. Verification of the algorithm that the Benchmarker claims to have used, involving re-running the algorithm against the challenge instance before checking that the same solution data is reproduced. -If verification fails, the benchmark is flagged as fraudulent, disqualifying its solutions. In the future a slashing penalty will be applied. +If verification fails, the benchmark is flagged as fraudulent, disqualifying its solutions. In the future (when Benchmarker deposits are introduced) a slashing penalty will be applied. ## 4.4. Sharing in Block Rewards -Every block, `85%` of block rewards are distributed pro-rata amongst Benchmarkers based on influence. A Benchmarker's influence is based on their fraction of qualifying solutions across challenges with only active solutions eligible. +Every block, 85% of block rewards are distributed pro-rata amongst Benchmarkers based on influence. A Benchmarker’s influence is based on their fraction of qualifying solutions across challenges with only active solutions eligible. ### 4.4.1. Cutoff Mechanism -To strongly disincentivise Benchmarkers from focusing only a single challenge (e.g. benchmarking their own algorithm), TIG employs a cutoff mechanism. This mechanism limits the maximum qualifiers per challenge based on an average number of solutions multiplied by a multiplier (currently set to `2.5`). +To strongly disincentivise Benchmarkers from focusing only on a single challenge (e.g. benchmarking their own algorithm), TIG employs a cutoff mechanism. This mechanism limits the maximum qualifiers per challenge based on an average number of solutions multiplied by a multiplier (currently set to 2.5). -The multiplier is such that the cutoff mechanism will not affect normal benchmarking in `99.9%` of cases. \ No newline at end of file +The multiplier is such that the cutoff mechanism will not affect normal benchmarking in 99.9% of cases. diff --git a/docs/tech/5_opow.md b/docs/tech/5_opow.md index d71272eb..9cc5b348 100644 --- a/docs/tech/5_opow.md +++ b/docs/tech/5_opow.md @@ -1,33 +1,45 @@ - # 5. Optimisable Proof-of-Work -Optimisable proof-of-work (OPoW) distinctively requires multiple proof-of-works to be featured, "binding" them in such a way that optimisations to the proof-of-work algorithms do not cause instability/centralisation. This binding is embodied in the calculation of influence for Benchmarkers. The adoption of an algorithm is then calculated using each Benchmarker's influence and the fraction of qualifiers they computed using that algorithm. +Optimisable proof-of-work (OPoW) distinctively requires multiple proof-of-works to be featured, “binding” them in such a way that optimisations to the proof-of-work algorithms do not cause instability/centralisation. 
This binding is embodied in the calculation of influence for Benchmarkers. The adoption of an algorithm is then calculated using each Benchmarker’s influence and the fraction of qualifiers they computed using that algorithm.

## 5.1. Influence

-OPoW introduces a novel metric, imbalance, aimed at quantifying the degree to which a Benchmarker contributes to centralization by leveraging statistical measures. This is only possible when there are multiple proof-of-works.
+OPoW introduces a novel metric, imbalance, aimed at quantifying how unevenly a Benchmarker spreads their computational work across challenges. This is only possible when there are multiple proof-of-works.

-The metric is defined as $imbalance = \frac{CV(\%qualifiers)^2}{N - 1}$ where `CV` is coefficient of variation, `%qualifiers` is the fraction of qualifiers found by a Benchmarker across challenges, and `N` is the number of active challenges. This metric ranges from `0 to 1`, where lower values signify less centralisation.
+The metric is defined as $imbalance = \frac{CV(\bf{\%qualifiers})^2}{N-1}$ where CV is coefficient of variation, %qualifiers is the fraction of qualifiers found by a Benchmarker across challenges, and N is the number of active challenges. This metric ranges from 0 to 1, where lower values signify less centralisation.

-Penalising imbalance is achieved through $imbalance\_penalty = 1 - \frac{1}{1 + k \cdot imbalance}$, where `k` is a multiplier (currently set to `1.5`). The penalty ranges from `0 to 1`, where zero signifies no penalty.
+Penalising imbalance is achieved through $imbalance\textunderscore{ }penalty = 1 - \exp(-k \cdot imbalance)$, where k is a coefficient (currently set to 1.5). The penalty ranges from 0 to 1, where 0 signifies no penalty.

When block rewards are distributed pro-rata amongst Benchmarkers based on influence, where influence has imbalance penalty applied, the result is that Benchmarkers are incentivised to minimise their imbalance as to maximise their earnings:

-$$weight = mean(\%qualifiers) \cdot (1 - imbalance\_penalty)$$
-$$influence = normalise(weights\ across\ benchmarkers)$$
+$$weight = mean(\bf{\%qualifiers}) \cdot (1 - imbalance\textunderscore{ }penalty)$$
+$$influence = \frac{\bf{weights}}{sum(\bf{weights})}$$

-Notes:
+Where:
+* $\bf{\%qualifiers}$ is particular to a Benchmarker, where elements correspond to the fraction of qualifiers found by a Benchmarker for a challenge
+* $\bf{weights}$ is a set, where elements correspond to the weight for a particular Benchmarker

-- A Benchmarker focusing solely on a single challenge will exhibit a maximum imbalance value of `1`.
+Notes:

-- Conversely, a Benchmarker with an equal fraction of qualifiers across all challenges will demonstrate a minimum imbalance value of `0`.
+- A Benchmarker focusing solely on a single challenge will exhibit a maximum imbalance and therefore maximum penalty.
+- Conversely, a Benchmarker with an equal fraction of qualifiers across all challenges will demonstrate a minimum imbalance value of 0.
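+
+For intuition, here is a minimal Rust sketch evaluating the formulas above for three hypothetical Benchmarkers. It is not protocol code: the %qualifiers values are invented, and population statistics are assumed for the coefficient of variation:
+
+```rust
+/// imbalance = CV(%qualifiers)^2 / (N - 1), where CV = standard deviation / mean.
+fn imbalance(pct_qualifiers: &[f64]) -> f64 {
+    let n = pct_qualifiers.len() as f64;
+    let mean = pct_qualifiers.iter().sum::<f64>() / n;
+    if mean == 0.0 {
+        return 0.0;
+    }
+    let variance = pct_qualifiers.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
+    let cv = variance.sqrt() / mean;
+    cv * cv / (n - 1.0)
+}
+
+fn main() {
+    const K: f64 = 1.5;
+    // %qualifiers per challenge for three hypothetical Benchmarkers (3 active challenges).
+    let benchmarkers = [
+        vec![0.10, 0.10, 0.10], // perfectly balanced: zero imbalance, zero penalty
+        vec![0.20, 0.05, 0.05], // somewhat imbalanced
+        vec![0.30, 0.00, 0.00], // focused on a single challenge: maximum imbalance
+    ];
+
+    let weights: Vec<f64> = benchmarkers
+        .iter()
+        .map(|q| {
+            let penalty = 1.0 - (-K * imbalance(q)).exp();
+            let mean = q.iter().sum::<f64>() / q.len() as f64;
+            mean * (1.0 - penalty)
+        })
+        .collect();
+
+    // Influence is each Benchmarker's weight normalised over all weights.
+    let total: f64 = weights.iter().sum();
+    for (i, w) in weights.iter().enumerate() {
+        println!("benchmarker {}: influence = {:.3}", i, w / total);
+    }
+}
+```
+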
## 5.2. Adoption

-Any active solution can be assumed to already have undergone verification of the algorithm used. This allows the straightforward use of Benchmarkers' influence for calculating algorithm adoption:
+Any active solution can be assumed to already have undergone verification of the algorithm used. This allows the straightforward use of Benchmarkers' influence for calculating an algorithm's weight:
+
+$$weight = sum(\bf{influences} \cdot \bf{algorithm\textunderscore{ }\%qualifiers})$$
+
+Where:
+* $\bf{influences}$ is a set, where elements correspond to the influence for a particular Benchmarker
+* $\bf{algorithm\textunderscore{ }\%qualifiers}$ is a set, where elements correspond to the fraction of qualifiers found by a Benchmarker using a particular algorithm
+
+Then, for each challenge, adoption is calculated:
+
+$$adoption = \frac{\bf{weights}}{sum(\bf{weights})}$$

-$$weight = sum(influence \cdot \%qualifiers) \cdot (1 - imbalance\_penalty)$$
-$$adoption = normalise(weights\ across\ algorithms\ for\ a\ challenge)$$
+
+Where:
+* $\bf{weights}$ is a set, where elements correspond to the weight for a particular algorithm

-By integrating influence into the adoption calculation, TIG effectively guards against potential manipulation by Benchmarkers.
\ No newline at end of file
+By integrating influence into the adoption calculation, TIG effectively guards against potential manipulation by Benchmarkers.
diff --git a/docs/tech/6_q_and_a.md b/docs/tech/6_q_and_a.md
deleted file mode 100644
index b3a42524..00000000
--- a/docs/tech/6_q_and_a.md
+++ /dev/null
@@ -1 +0,0 @@
-placeholder
\ No newline at end of file
diff --git a/docs/tech/6_roadmap.md b/docs/tech/6_roadmap.md
new file mode 100644
index 00000000..df62e909
--- /dev/null
+++ b/docs/tech/6_roadmap.md
@@ -0,0 +1,15 @@
+
+# 6. Roadmap
+
+The technical roadmap for TIG in 2024 is as follows:
+
+| **Feature** | **Approximate Date** |
+| --- | --- |
+| New challenge (c004) | July |
+| Locked Deposits (necessary for voting) | August |
+| New challenge (c005) | September |
+| Benchmarker Staking | October |
+| Breakthrough Voting | November |
+| New challenge (c006) | December |
+
+TIG intends to migrate to an L1 blockchain where OPoW is integrated with the consensus layer in 2025. This will leverage Polkadot’s Substrate. (TIG is currently running in an off-chain execution & on-chain settlement configuration.)
\ No newline at end of file