rename MODEL advanced to parse with layout
Sdddell committed Nov 1, 2024
1 parent 7f137d1 commit a810851
Showing 3 changed files with 15 additions and 10 deletions.
2 changes: 1 addition & 1 deletion any_parser/any_parser.py
@@ -283,7 +283,7 @@ def async_extract(
        process_type = ProcessType.FILE
    elif model == ModelType.PRO:
        process_type = ProcessType.FILE_REFINED_QUICK
    elif model == ModelType.ADVANCED:
    elif model == ModelType.PARSE_WITH_LAYOUT:
        process_type = ProcessType.PARSE_WITH_LAYOUT
    else:
        return "Error: Invalid model type", None
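The renamed branch keeps `async_extract`'s model-to-process-type dispatch intact. As a standalone sketch (only the `ProcessType` member names appear in the diff, so their string values are assumptions, and `resolve_process_type` is a hypothetical helper, not part of the library):

```python
from enum import Enum

class ModelType(Enum):
    BASE = "base"
    PRO = "pro"
    PARSE_WITH_LAYOUT = "parse_with_layout"

class ProcessType(Enum):
    # Member names taken from the diff; string values are assumed.
    FILE = "file"
    FILE_REFINED_QUICK = "file_refined_quick"
    PARSE_WITH_LAYOUT = "parse_with_layout"

def resolve_process_type(model):
    # Same branch structure as the hunk in async_extract.
    if model == ModelType.BASE:
        return ProcessType.FILE
    elif model == ModelType.PRO:
        return ProcessType.FILE_REFINED_QUICK
    elif model == ModelType.PARSE_WITH_LAYOUT:
        return ProcessType.PARSE_WITH_LAYOUT
    return None  # caller reports "Error: Invalid model type"
```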
4 changes: 2 additions & 2 deletions any_parser/utils.py
@@ -8,7 +8,7 @@
class ModelType(Enum):
    BASE = "base"
    PRO = "pro"
    ADVANCED = "advanced"
    PARSE_WITH_LAYOUT = "parse_with_layout"


SUPPORTED_FILE_EXTENSIONS = [
@@ -49,7 +49,7 @@ def upload_file_to_presigned_url(


def check_model(model: ModelType) -> None:
    if model not in {ModelType.BASE, ModelType.PRO, ModelType.ADVANCED}:
    if model not in {ModelType.BASE, ModelType.PRO, ModelType.PARSE_WITH_LAYOUT}:
        valid_models = ", ".join(["`" + model.value + "`" for model in ModelType])
        return f"Invalid model type: {model}. Supported `model` types include {valid_models}."

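After this commit, `check_model` accepts the renamed member. A minimal runnable sketch reproducing the enum and the check from the diff:

```python
from enum import Enum

class ModelType(Enum):
    BASE = "base"
    PRO = "pro"
    PARSE_WITH_LAYOUT = "parse_with_layout"

def check_model(model):
    # As in the diff: returns an error message for unsupported models
    # and falls through (returning None) for supported ones.
    if model not in {ModelType.BASE, ModelType.PRO, ModelType.PARSE_WITH_LAYOUT}:
        valid_models = ", ".join(["`" + m.value + "`" for m in ModelType])
        return f"Invalid model type: {model}. Supported `model` types include {valid_models}."
```

Passing the removed `"advanced"` value now yields the error string listing `base`, `pro`, and `parse_with_layout` as the supported types.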
19 changes: 12 additions & 7 deletions examples/async_parse_with_layout.ipynb
@@ -27,7 +27,7 @@
"metadata": {},
"outputs": [],
"source": [
"ap = AnyParser(api_key=\"S4iyw7RAEE8CTGkVgHYeI8nsTmSALI1U2HXvAN6j\")"
"ap = AnyParser(api_key=\"...\")"
]
},
{
@@ -37,18 +37,23 @@
"outputs": [],
"source": [
"file_path = \"./sample_data/test_1figure_1table.png\"\n",
"file_id = ap.async_extract(file_path, ModelType.ADVANCED, {})"
"file_id = ap.async_extract(file_path, ModelType.PARSE_WITH_LAYOUT, {})"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Waiting for response...\n",
"Waiting for response...\n",
"Waiting for response...\n",
"Waiting for response...\n",
"Waiting for response...\n",
"Waiting for response...\n",
"Waiting for response...\n"
]
@@ -60,7 +65,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"metadata": {},
"outputs": [
{
@@ -92,17 +97,17 @@
"\n",
"5.2 Availability\n",
"\n",
"Figure 5 illustrates the availability benefits of running Spanner in multiple datacenters. It shows the results of three experiments on throughput in the presence of datacenter failure, all of which are overlaid onto the same time scale. The test universe consisted of 5 zones Zi, each of which had 25 spanservers. The test database was sharded into 1250 Paxos groups, and 100 test clients constantly issued non-snapshot reads at an aggregate rate of 50K reads/second. All of the leaders were explicitly placed in Z1. Five seconds into each test, all of the servers in one zone were killed: non-leader kills Z2; leader-hard kills Z1; leader-soft kills Z1, but it gives notifications to all of the servers that they should handoff leadership first.\n",
"Figure 5 illustrates the availability benefits of running Spanner in multiple datacenters. It shows the results of three experiments on throughput in the presence of datacenter failure, all of which are overlaid onto the same time scale. The test universe consisted of 5 zones Z_i, each of which had 25 spanservers. The test database was sharded into 1250 Paxos groups, and 100 test clients constantly issued non-snapshot reads at an aggregate rate of 50K reads/second. All of the leaders were explicitly placed in Z_1. Five seconds into each test, all of the servers in one zone were killed: non-leader kills Z_2; leader-hard kills Z_1; leader-soft kills Z_1, but it gives notifications to all of the servers that they should handoff leadership first.\n",
"\n",
"Killing Z2 has no effect on read throughput. Killing Z1 while giving the leaders time to handoff leadership to a different zone has a minor effect: the throughput drop is not visible in the graph, but is around 3-4%. On the other hand, killing Z1 with no warning has a severe effect: the rate of completion drops almost to 0. As leaders get re-elected, though, the throughput of the system rises to approximately 100K reads/second because of two artifacts of our experiment: there is extra capacity in the system, and operations are queued while the leader is unavailable. As a result, the throughput of the system rises before leveling off again at its steady-state rate.\n",
"Killing Z_2 has no effect on read throughput. Killing Z_1 while giving the leaders time to handoff leadership to a different zone has a minor effect: the throughput drop is not visible in the graph, but is around 3-4%. On the other hand, killing Z_1 with no warning has a severe effect: the rate of completion drops almost to 0. As leaders get re-elected, though, the throughput of the system rises to approximately 100K reads/second because of two artifacts of our experiment: there is extra capacity in the system, and operations are queued while the leader is unavailable. As a result, the throughput of the system rises before leveling off again at its steady-state rate.\n",
"\n",
"We can also see the effect of the fact that Paxos leader leases are set to 10 seconds. When we kill the zone, the leader-lease expiration times for the groups should be evenly distributed over the next 10 seconds. Soon after each lease from a dead leader expires, a new leader is elected. Approximately 10 seconds after the kill time, all of the groups have leaders and throughput has recovered. Shorter lease times would reduce the effect of server deaths on availability, but would require greater amounts of lease-renewal network traffic. We are in the process of designing and implementing a mechanism that will cause slaves to release Paxos leader leases upon leader failure.\n",
"\n",
"5.3 TrueTime\n",
"\n",
"Two questions must be answered with respect to TrueTime: is ε truly a bound on clock uncertainty, and how bad does ε get? For the former, the most serious problem would be if a local clock’s drift were greater than 200us/sec: that would break assumptions made by TrueTime. Our machine statistics show that bad CPUs are 6 times more likely than bad clocks. That is, clock issues are extremely infrequent, relative to much more serious hardware problems. As a result, we believe that TrueTime’s implementation is as trustworthy as any other piece of software upon which Spanner depends.\n",
"\n",
"![<@mask_p0_e1_figure(timeout=1h)>](https://anyparser-realtime-test-j-assetsconstructfilebucke-2wg0ln280yvz.s3.amazonaws.com/result_parse_with_layout/async_S4iyw7RAEE8CTGkVgHYeI8nsTmSALI1U2HXvAN6j/2024/10/30/test_1figure_1table_215dd6ed-92a2-4636-8dc0-5636c689bf4b.png/%3C%40mask_p0_e1_figure_s3%3E.png?AWSAccessKeyId=ASIAXM24X76XBW2GFBFU&Signature=FyIrRoyyRfiKirQbeuVzNzXAowQ%3D&x-amz-security-token=IQoJb3JpZ2luX2VjEPT%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaCXVzLXdlc3QtMiJHMEUCIQCRUsV%2FHrEppoMWhVoou%2Ft2j%2FrgGV61zgYoeuLwFbKLcAIgKfL4H8nBkzZt2QWR0vm8ZjbKGJclV64yFzh%2FhHChK7Mq0wMIbRAAGgw1MDg2MTEyNjQ0MzAiDDPz9lhEp9Z9OwEe6SqwA6bt%2BuJ8wJD2UQ2OJpUYVwzjhPPKupLRyj0QWt6zNTg%2F%2BKYcLg6mx7s47rmNFpVAjr28CFcX8DxU9DvgILucIPPs2FCvxMoGpXGmrerdRONK1hbSWiEsaiVd7%2B4AW27R5omTB%2F1%2Fk84WUBZRuWPCGn8dZPqAZaBM4dSyUMBjnoTKBmaytwbxTOLQ9mvil%2Fj7JjZaeOtuBWGN8M8GPikXwTtXiJDFqofsWn8hRzMsMYRrueamEn%2BRZ07l9FIie4VRYNYuQDE5DkVI5misl9lpXY1OawATogpCj8SBI%2BwqZUpVe89qj67cMCKH78vAdBqbteNy14M36DIhdiRrVOXzoyibQW4I4jp9wtDQdSbPjS9RqRC36CLfseN%2BVzjWzYxhGoI2sPF5NMPHVGPMSq6VcCA3G%2FW85FLER6XRKCdM03%2FLrt1G7ocowIsbVHE%2BP%2F0jh5Bg8ZQyd%2FUb0yS%2FJQMbQzM72qZOP8nMfl4eytljo4vMDwwP9485uPqM1wcHGrMtk48vqlgcM9Br5cQtikV3q4Q8O3ND9G5xmICPO4LhfE%2B9XeKkXdekRBQ7qTj6ub%2BxEDDS2oa5BjqeAXJbS7XvEc73%2BsWZwamhZLO5Nia3uJDJY5qGjDGEieB%2FhtpLyszAJGHLkJKSaGBM8kwyiPGybwidi7vFcGBZ08CBk7RXVQFpFWmSKtRfbVqOCubcsmgfTUjb7UOfRz3swIvEt3kjazXLowhvy9U1%2B3ISQohEe80zKSJHbc0nE75%2F%2BP97Cl6%2FAVsFqKMZPLcsDrooFR3M5WBPUqebcJQ%2F&Expires=1730264238)\n",
"![<@mask_p0_e1_figure(timeout=1h)>](https://anyparser-realtime-test-j-assetsconstructfilebucke-2wg0ln280yvz.s3.amazonaws.com/result_parse_with_layout/async_S4iyw7RAEE8CTGkVgHYeI8nsTmSALI1U2HXvAN6j/2024/11/1/test_1figure_1table_f685e88d-d27f-4f1b-9f6c-f03a1fc9ae83.png/%3C%40mask_p0_e1_figure_s3%3E.png?AWSAccessKeyId=ASIAXM24X76XOK7ITU5J&Signature=CLJE%2BOeN6U%2F49Jkd3xm%2FZvguPTM%3D&x-amz-security-token=IQoJb3JpZ2luX2VjECQaCXVzLXdlc3QtMiJHMEUCIQD4c8%2BB73pmEE8VT5NHxpyJlvnko7TgUhRp17lxlf0n1AIgKUhyP7tkU7TiMOraliELOnkaiGJmpFnx8DNKXF8Cq84q3AMInf%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FARAAGgw1MDg2MTEyNjQ0MzAiDKBsIf2UfBx%2BZLkPHCqwA01i7fZyNVUUsZTGQo8kILSfH1ueDrR3JwOdux0bJzq%2BL2g5lx9LpsEz5BL%2BZg7RwkqQwK6AHHNunLmiBU96QfW%2FwvBItqPg%2FzIoAHbmS4WLksKm6c1zXG34etNqrXWgoJ%2FUt3qshGP%2F5TcQYXuIXk%2FL%2Fh8%2Bd2rDTtRDovpWewsPp7hLydxqgNfDihtsY0UzoPKlYK5zzbBShhoqrG3y5AGBF1F9Q%2FFByeW7PiH4OZEwMeQ%2FNZBTNKJ%2BI92iEWbRT0av4t10zL5jlS%2FnH20RBCQsFE%2F5J7Vf4oJyQL8tCMlUHScHqsMvkH%2BEZ4LIxF2cgHwXbzCKWhmI6H44nNC9DM2Ivhy40ETbvPYi3y%2BRgUXxabdnLmmCjz1ls%2Bbqnw1TDx9JjD693KSOSuW7qOikIduS8j4YEdinzKxr6a01JBOeHwb3zUFVwprhqOR2yGy%2FaPjYZN8nuUaH5muRt0KCZTudRvRYobCaxCrXi1I6cicmEPxreaDS43EpIiqfI9n1bhZPNE%2FYqzDvOXZjFM3%2Bcqa1Wwhyiywhv0I0xE%2Ftl%2B5jQe1hH4invJA2H%2FUCZ2vDbDCVlpG5BjqeAQX7PKBMX2QivsKT2kTvqP1F2ByedRRh0tVPBXVyVKudp4skyHUUq8GvCxSLmlH4fwS5KPXFDC6ehQM2RXuHrdkgDhQzGsf15ZwMN%2Bq9aKkqqzXE6U0Ekp8B3Zg4xx8PmlftrHhMpGQCPMz8SPQaT2n9%2B1Aredixh4gT0%2BAQwvcprapl5AAtYLFyMDyj8T8UKTwn2eJ8%2FiR5r1STBOQA&Expires=1730435409)\n",
"\n",
"Figure 5: Effect of killing servers on throughput.\n"
],
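The repeated "Waiting for response..." lines in the notebook output come from polling the asynchronous job until the parsed result is ready. A generic sketch of that loop (`poll_until_done` and `fetch_result` are hypothetical names for illustration, not the AnyParser API):

```python
import time

def poll_until_done(fetch_result, interval=0.0, max_tries=10):
    # fetch_result: any callable that returns None while the job is
    # still pending and the parsed result once it is ready.
    for _ in range(max_tries):
        result = fetch_result()
        if result is not None:
            return result
        print("Waiting for response...")
        time.sleep(interval)
    return None  # gave up after max_tries polls

# Simulated server that answers on the third poll:
responses = iter([None, None, "# parsed markdown"])
print(poll_until_done(lambda: next(responses)))
# prints "Waiting for response..." twice, then "# parsed markdown"
```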
