From d87f54fe28d1d79b8c07f978140f90a566d5061b Mon Sep 17 00:00:00 2001
From: nic-chen <33000667+nic-chen@users.noreply.github.com>
Date: Mon, 6 Jul 2020 19:04:41 +0800
Subject: [PATCH] sync (#11)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
* feature: implemented plugin `sys logger`. (#1414)
* bugfix(CORS): using rewrite phase and add lru cache for multiple origin (#1531)
* change: updated the dashboard submodule to latest version. (#1540)
* doc: updated the logger plugins' documentation. (#1541)
* bugfix: added a function to remove stale objects from the kafka logger (#1526)
* bugfix: removed stale objects from the tcp logger (#1543)
* bugfix: removed stale objects from the udp logger (#1544)
* optimize: use buffer for plugin `syslog`. (#1551)
* plugin: add HTTP logger for APISIX (#1396)
* bugfix: got 500 error when using the POST method in the grpc-transcode plugin (#1566)
* bugfix: removed stale object in sys log. (#1557)
* feature(prometheus): support to collect metric `overhead` (#1576)
Fix #1534 .
* feature: support new field `exptime` for SSL object. (#1575)
fix #1571.
* doc: Added FAQ about how to reload your own plugin (#1568)
* doc: repair the white paper's url of README (#1582)
* chore: fix function name typo in ip-restriction (#1586)
* doc: added http logger Chinese docs (#1581)
* feature: support discovery center (#1440)
* doc:add chinese version for install doc (#1590)
* bugfix: incorrect variable name `hostCount` (#1585)
* doc: update kafka logger plugin's cn version (#1594)
* doc: fix the doc style for *_logger.md (#1605)
* bugfix: raise error when none of the configured etcd can be connected (#1608)
Close #1561.
* test: updated style. (#1606)
* release: released 1.3 version. (#1558)
* bugfix(CLI): fixed garbled Chinese response in browser. (#1598)
fix #1559
* change: updated prometheus to version 1.1. (#1607)
* doc: add asf.yaml. (#1612)
* fix some doc style for response-rewrite* and health-check.md (#1611)
* makefile: add default check for install command (#1591)
* test cases: add doc and test cases for how to redirect http to https. (#1595)
* add FAQ about redirecting HTTP to HTTPS
* add test cases for serverless plugin and redirect plugin
Co-authored-by: rhubard <18734141014@163.com>
* feature: add skywalking plugin. (#1241)
* doc: removed external links and docs. (#1619)
* doc: add coc file (#1589)
* bugfix: change the version of skywalking to 1.0-0 (#1624)
* bugfix(prometheus): the `overhead` should use milliseconds. #1615 (#1616)
Fix #1615
* feature: add option to include request body in log util (#1545)
* bugfix: fix typo of `instance_id` in skywalking plugin. (#1629)
* doc: added the link to discovery.md (#1631)
* change(ASF): add notifications to mailing list. (#1635)
* change(doc): style for HttpResponse section (#1634)
* doc(limit-count): fixed document description does not match source code. (#1628)
close #1627
* bugfix(batch-requests): support cookie (#1599)
* feat(admin api): enhance `PATCH` method, allow to update partial data. (#1609)
* test: added test cases for skywalking. (#1621)
* doc: add skywalking plugin instructions (#1636)
* feature: support http_to_https in redirect plugin. (#1642)
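For context, the new option is enabled per route in the `redirect` plugin's configuration, roughly like the following (a sketch only; the `uri` value is a placeholder):
```json
{
    "uri": "/hello",
    "plugins": {
        "redirect": {
            "http_to_https": true
        }
    }
}
```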
* test: add test case for #1625 to test the field of overhead (#1645)
* CLI: compatibility of benchmark script and apisix reload command on OSX (#1650)
* feature: support to enable HTTPS for admin API (#1648)
* [log] Optimize the buffer size and flush time (#1570)
* yousali: Optimize the buffer size and flush time
1. buffer=4096, because writes of more than PIPE_BUF bytes may be non-atomic
2. flush=1. Since the log buffer is lowered, the flush time should also be lowered.
* yousali:
hi, I also made a test.
```
4096 Requests/sec: 16079.75
8192 Requests/sec: 16389.52
16384 Requests/sec: 16395.30
32768 Requests/sec: 16459.71
```
I think a log buffer size of 8192 or 16384 would be appropriate.
On the other hand, a flush time of 3 seconds is still relatively long, and 1 vs. 3 seconds doesn't particularly affect QPS.
So I also agree with `buffer=16384 flush=1;`
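For reference, the agreed-upon setting maps onto nginx's `access_log` directive roughly like this (a sketch; the log path and format name come from the generated nginx.conf, not from this patch):
```nginx
# buffer: accumulate up to 16 KB of log lines in memory before writing;
# flush: write buffered lines out at least once per second
access_log logs/access.log main buffer=16384 flush=1;
```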
* doc: add 'X-API-KEY' parameter for each interface of Admin API. (#1661)
* bugfix: wildcard certificates cannot match multi-level subdomains in … (#810)
* plugin: add consumer-restriction (#1437)
* feat: support resource name for route, service and upstream object. (#1655)
* [bugfix(CLI)]: check whether the user has enabled etcd v2 protocol. (#1665)
* bugfix(CLI): generate the 'worker_cpu_affinity' config for Linux OS (#1658)
Fix #1657
* test case: formatted by `reindex`. (#1651)
* change: disable reuseport in development mode; it is easier to manage worker processes. (#1175)
* test: add test case for route with `filter_func`. (#1683)
* doc: rename grpc-transcoding-cn.md to grpc-transcode-cn.md (#1694)
* fix bug: executing the command 'make run' multiple times started multiple processes (#1692)
Fix #1690
* doc(FAQ): added example for gray release. (#1687)
* change: set default reject code for some plugins (#1696)
plugin list:
limit-count
limit-conn
limit-req
* feature: ssl enhance (#1678)
support enabling or disabling SSL via the PATCH method
support encrypted storage of the SSL private key in etcd
support multiple SNIs
Fix #1668
* feature: support body filter plugin `echo`. (#1632)
* doc: Update README_CN.md (#1705)
* change: use `iterate` to scan items in etcd. (#1717)
related issue: #1685
* doc: added doc of key for limit-* plugins. (#1714)
* feature: support authorization Plugin for Keycloak Identity Server (#1701)
* feat[batch-request]: cp all header to every request (#1697)
* doc: updated main picture. (#1719)
* doc: update echo-cn.md (#1726)
* update `resty-etcd` to version 1.0. (#1725)
* doc: health-check-cn.md (#1723)
* doc: add Chinese translation of authz-keycloak plugin (#1729)
* doc: Refactoring docs to support docsify (#1724)
* change: update `resty-radixtree` to version 1.9 . (#1730)
* feature: support the use of independent files to implement the load a… (#1732)
* feature: support the use of independent files to implement the load-balancing algorithm,
which is convenient for adding different algorithms in the future.
* feature(echo): support header filter and access phases. (#1708)
* bugfix: id can be string object, which contains `^[a-zA-Z0-9-_]+$`. (#1739)
Fix #1654
* test: add test cases about the string id in `service` #1659 (#1750)
* update `lua-resty-radixtree` to version 2.0. (#1748)
* refactor: collect the `upstream` logic and put it in a single file. (#1734)
feature: support dynamic upstream in plugin.
here is a mini example in `access` phase of plugin:
```lua
local up_conf = {
    type = "roundrobin",
    nodes = {
        {host = conf.upstream.ip, port = conf.upstream.port, weight = 1},
    }
}

local ok, err = upstream.check_schema(up_conf)
if not ok then
    return 500, err
end

local matched_route = ctx.matched_route
upstream.set(ctx, up_conf.type .. "#route_" .. matched_route.value.id,
             ctx.conf_version, up_conf, matched_route)
return
```
* feature: implemented plugin `uri-blocker`. (#1727)
first step: #1617
* doc: update `http-logger` plugins Chinese docs. (#1755)
* doc: update admin-api docs (#1753)
* doc: add oauth plugins Chinese docs. (#1754)
* bugfix: fixed nginx.conf configuration for security reasons (#1759)
removed `working_directory`, and removed TLSv1 and TLSv1.1 from `ssl_protocols`
* doc: update Chinese README.md (#1758)
* test: use longer ttl, avoid the cached item expired. (#1760)
* doc: updated k8s doc (#1757)
* bugfix: fix for remote OpenID Connect introspection (#1743)
fix #1741
* test: added test cases. (#1752)
* bugfix: added `content-type` for admin API responses (#1746)
* feature: support etcd auth (#1769)
Fix #1713 , #1770
* plugin(heartbeat): use `info` log level when failed to report heartbeat. (#1771)
* optimize: use LRU cache to avoid resolving IP addresses repeatedly. (#1772)
* optimize: use LRU cache to avoid resolving IP addresses repeatedly.
Cached the global rules in `ctx`.
* optimize: used a longer time interval for etcd and for flushing the access log.
* optimize: return the upstream node directly if the count is 1.
* optimize: avoid caching useless variables.
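The LRU idea above can be sketched with `lua-resty-lrucache` (a minimal illustration, not the actual APISIX code; `resolver` is a hypothetical callback):
```lua
local lrucache = require("resty.lrucache")

local cache, err = lrucache.new(512)   -- cache up to 512 resolved hosts
if not cache then
    error("failed to create lru cache: " .. (err or "unknown"))
end

local function resolve_cached(host, resolver)
    -- return the cached IP if present, otherwise resolve and cache it
    local ip = cache:get(host)
    if ip then
        return ip
    end
    ip = resolver(host)
    if ip then
        cache:set(host, ip, 300)       -- keep the result for 300 seconds
    end
    return ip
end
```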
* doc: update Chinese README.md (#1763)
* doc: remove router `r3`. (#1764)
* release: released 1.4-0 version (#1742)
* bugfix(config etcd): when we reset the fetched data, `sync_times` also needs to be reset. (#1785)
* change: remove authentication type for cors plugin (#1788)
fix #1787
* rocks: fixed wrong source of 1.4. (#1783)
* change: the 'get_plugin_list' API sorts the returned list based on priority (#1779)
* test: format by tool `reindex`. (#1775)
* bugfix: missing argument `premature` because it was called by ngx.timer . (#1796)
* bugfix: return `404 Not Found` when the dashboard folder is empty. (#1799)
close #1794
* doc: add guides for installing dependencies on fedora (#1800)
* doc: fixed some punctuation errors in the document's sample shell (#1803)
Co-authored-by: Ayeshmantha Perera
Co-authored-by: Vinci Xu <277040271@qq.com>
Co-authored-by: Nirojan Selvanathan
Co-authored-by: YuanSheng Wang
Co-authored-by: Yousa
Co-authored-by: hiproz
Co-authored-by: 罗泽轩
Co-authored-by: Scaat Feng
Co-authored-by: qiujiayu <153163285@qq.com>
Co-authored-by: dengliming
Co-authored-by: dabue <53054094+dabue@users.noreply.github.com>
Co-authored-by: Wen Ming
Co-authored-by: xxm404 <46340314+xxm404@users.noreply.github.com>
Co-authored-by: rhubard <18734141014@163.com>
Co-authored-by: Gerrard-YNWA
Co-authored-by: 月夜枫
Co-authored-by: 仇柯人
Co-authored-by: stone4774 <25053818+stone4774@users.noreply.github.com>
Co-authored-by: 琚致远
Co-authored-by: Kev.Hu
Co-authored-by: QuakeWang <45645138+QuakeWang@users.noreply.github.com>
Co-authored-by: agile6v
Co-authored-by: Corey.Wang
Co-authored-by: hellmage
Co-authored-by: Eric Shi
Co-authored-by: Shenal Silva
Co-authored-by: jackstraw <932698529@qq.com>
Co-authored-by: morrme
Co-authored-by: ko han
Co-authored-by: Joey
Co-authored-by: YuanYingdong <1975643103@qq.com>
---
.asf.yaml | 48 +
.travis.yml | 2 +-
.travis/apisix_cli_test.sh | 77 ++
.../linux_apisix_current_luarocks_runner.sh | 5 +
.../linux_apisix_master_luarocks_runner.sh | 9 +-
.travis/linux_openresty_runner.sh | 13 +-
.travis/linux_tengine_runner.sh | 13 +-
.travis/osx_openresty_runner.sh | 2 +-
CHANGELOG.md | 33 +
CHANGELOG_CN.md | 32 +
CODE_OF_CONDUCT.md | 127 +++
CODE_STYLE.md | 393 --------
FAQ.md | 108 ++-
FAQ_CN.md | 107 ++-
Makefile | 15 +-
README.md | 17 +-
README_CN.md | 75 +-
apisix/admin/global_rules.lua | 37 +-
apisix/admin/plugins.lua | 22 +-
apisix/admin/routes.lua | 50 +-
apisix/admin/services.lua | 39 +-
apisix/admin/ssl.lua | 125 ++-
apisix/admin/stream_routes.lua | 10 +-
apisix/admin/upstreams.lua | 39 +-
apisix/balancer.lua | 250 ++---
apisix/balancer/chash.lua | 80 ++
apisix/balancer/roundrobin.lua | 34 +
apisix/core/config_etcd.lua | 1 +
apisix/core/config_yaml.lua | 5 +-
apisix/core/table.lua | 22 +-
apisix/core/version.lua | 2 +-
apisix/discovery/eureka.lua | 253 +++++
apisix/discovery/init.lua | 33 +
apisix/http/router/radixtree_sni.lua | 78 +-
apisix/http/service.lua | 36 +-
apisix/init.lua | 198 ++--
apisix/plugin.lua | 3 +-
apisix/plugins/authz-keycloak.lua | 165 ++++
apisix/plugins/batch-requests.lua | 57 +-
apisix/plugins/consumer-restriction.lua | 94 ++
apisix/plugins/cors.lua | 98 +-
apisix/plugins/echo.lua | 119 +++
apisix/plugins/example-plugin.lua | 26 +-
apisix/plugins/grpc-transcode/util.lua | 2 +-
apisix/plugins/heartbeat.lua | 2 +-
apisix/plugins/http-logger.lua | 176 ++++
apisix/plugins/ip-restriction.lua | 6 +-
apisix/plugins/kafka-logger.lua | 29 +-
apisix/plugins/limit-conn.lua | 4 +-
apisix/plugins/limit-count.lua | 5 +-
apisix/plugins/limit-req.lua | 4 +-
apisix/plugins/openid-connect.lua | 5 +-
apisix/plugins/prometheus/exporter.lua | 57 +-
apisix/plugins/redirect.lua | 37 +-
apisix/plugins/skywalking.lua | 80 ++
apisix/plugins/skywalking/client.lua | 232 +++++
apisix/plugins/skywalking/tracer.lua | 101 ++
apisix/plugins/syslog.lua | 189 ++++
apisix/plugins/tcp-logger.lua | 28 +-
apisix/plugins/udp-logger.lua | 28 +-
apisix/plugins/uri-blocker.lua | 86 ++
apisix/plugins/zipkin.lua | 2 +-
apisix/router.lua | 48 +-
apisix/schema_def.lua | 74 +-
apisix/stream/plugins/mqtt-proxy.lua | 35 +-
apisix/upstream.lua | 154 +++
apisix/utils/log-util.lua | 18 +-
benchmark/fake-apisix/conf/nginx.conf | 10 +-
benchmark/fake-apisix/lua/apisix.lua | 8 +-
benchmark/run.sh | 7 +-
bin/apisix | 58 +-
conf/cert/apisix_admin_ssl.crt | 33 +
conf/cert/apisix_admin_ssl.key | 51 +
conf/cert/openssl-test2.conf | 40 +
conf/cert/test2.crt | 28 +
conf/cert/test2.key | 39 +
conf/config.yaml | 33 +-
dashboard | 2 +-
doc/README.md | 21 +-
doc/README_CN.md | 68 --
doc/_navbar.md | 22 +
doc/_sidebar.md | 103 ++
doc/admin-api.md | 44 +-
doc/architecture-design.md | 43 +-
doc/benchmark.md | 12 +-
doc/discovery.md | 244 +++++
doc/getting-started.md | 2 +-
doc/grpc-proxy.md | 2 +-
doc/health-check.md | 37 +-
doc/how-to-build.md | 14 +-
doc/https.md | 2 +-
doc/images/apache.png | Bin 0 -> 8491 bytes
doc/images/apisix.png | Bin 202492 -> 277679 bytes
doc/images/discovery-cn.png | Bin 0 -> 42581 bytes
doc/images/discovery.png | Bin 0 -> 46310 bytes
doc/images/plugin/authz-keycloak.png | Bin 0 -> 51957 bytes
doc/images/plugin/skywalking-1.png | Bin 0 -> 22230 bytes
doc/images/plugin/skywalking-2.png | Bin 0 -> 30629 bytes
doc/images/plugin/skywalking-3.png | Bin 0 -> 62575 bytes
doc/images/plugin/skywalking-4.png | Bin 0 -> 30340 bytes
doc/images/plugin/skywalking-5.png | Bin 0 -> 49605 bytes
doc/index.html | 52 ++
doc/install-dependencies.md | 16 +
doc/plugin-develop.md | 8 +-
doc/plugins.md | 2 +-
doc/plugins/authz-keycloak.md | 135 +++
doc/plugins/basic-auth.md | 2 +-
doc/plugins/batch-requests.md | 10 +-
doc/plugins/consumer-restriction.md | 134 +++
doc/plugins/cors.md | 2 +-
doc/plugins/echo.md | 95 ++
doc/plugins/fault-injection.md | 2 +-
...{grpc-transcoding.md => grpc-transcode.md} | 2 +-
doc/plugins/http-logger.md | 100 ++
doc/plugins/ip-restriction.md | 4 +-
doc/plugins/jwt-auth.md | 2 +-
doc/plugins/kafka-logger-cn.md | 129 ---
doc/plugins/kafka-logger.md | 27 +-
doc/plugins/key-auth.md | 2 +-
doc/plugins/limit-conn.md | 8 +-
doc/plugins/limit-count.md | 7 +-
doc/plugins/limit-req.md | 6 +-
doc/plugins/mqtt-proxy.md | 2 +-
doc/plugins/prometheus.md | 2 +-
doc/plugins/proxy-cache.md | 6 +-
doc/plugins/proxy-mirror.md | 6 +-
doc/plugins/proxy-rewrite.md | 2 +-
doc/plugins/redirect.md | 20 +-
doc/plugins/response-rewrite.md | 14 +-
doc/plugins/serverless.md | 2 +-
doc/plugins/skywalking.md | 187 ++++
doc/plugins/syslog.md | 105 +++
doc/plugins/tcp-logger.md | 10 +-
doc/plugins/udp-logger.md | 9 +-
doc/plugins/uri-blocker.md | 96 ++
doc/plugins/wolf-rbac.md | 2 +-
doc/plugins/zipkin.md | 4 +-
doc/stand-alone.md | 2 +-
doc/stream-proxy.md | 2 +-
doc/zh-cn/README.md | 83 ++
doc/zh-cn/_sidebar.md | 103 ++
doc/{admin-api-cn.md => zh-cn/admin-api.md} | 60 +-
.../architecture-design.md} | 60 +-
doc/zh-cn/batch-processor.md | 69 ++
doc/{benchmark-cn.md => zh-cn/benchmark.md} | 4 +-
doc/zh-cn/discovery.md | 253 +++++
.../getting-started.md} | 8 +-
doc/{grpc-proxy-cn.md => zh-cn/grpc-proxy.md} | 4 +-
doc/zh-cn/health-check.md | 103 ++
.../how-to-build.md} | 14 +-
doc/{https-cn.md => zh-cn/https.md} | 2 +-
doc/zh-cn/install-dependencies.md | 153 +++
.../plugin-develop.md} | 8 +-
doc/{plugins-cn.md => zh-cn/plugins.md} | 2 +-
doc/zh-cn/plugins/authz-keycloak-cn.md | 124 +++
.../plugins/basic-auth.md} | 6 +-
.../plugins/batch-requests.md} | 8 +-
doc/zh-cn/plugins/consumer-restriction.md | 128 +++
.../cors-cn.md => zh-cn/plugins/cors.md} | 2 +-
doc/zh-cn/plugins/echo.md | 92 ++
.../plugins/fault-injection.md} | 2 +-
.../plugins/grpc-transcode.md} | 4 +-
doc/zh-cn/plugins/http-logger.md | 97 ++
.../plugins/ip-restriction.md} | 4 +-
.../plugins/jwt-auth.md} | 6 +-
doc/zh-cn/plugins/kafka-logger.md | 127 +++
.../plugins/key-auth.md} | 6 +-
.../plugins/limit-conn.md} | 15 +-
.../plugins/limit-count.md} | 11 +-
.../plugins/limit-req.md} | 18 +-
.../plugins/mqtt-proxy.md} | 2 +-
doc/zh-cn/plugins/oauth.md | 129 +++
.../plugins/prometheus.md} | 16 +-
.../plugins/proxy-cache.md} | 6 +-
.../plugins/proxy-mirror.md} | 6 +-
.../plugins/proxy-rewrite.md} | 2 +-
.../plugins/redirect.md} | 22 +-
.../plugins/response-rewrite.md} | 20 +-
.../plugins/serverless.md} | 2 +-
doc/zh-cn/plugins/skywalking.md | 192 ++++
doc/zh-cn/plugins/syslog.md | 105 +++
.../plugins/tcp-logger.md} | 11 +-
.../plugins/udp-logger.md} | 11 +-
.../plugins/wolf-rbac.md} | 6 +-
.../zipkin-cn.md => zh-cn/plugins/zipkin.md} | 12 +-
doc/{profile-cn.md => zh-cn/profile.md} | 0
.../stand-alone.md} | 2 +-
.../stream-proxy.md} | 4 +-
kubernetes/README.md | 6 +
kubernetes/apisix-gw-config-cm.yaml | 1 +
rockspec/apisix-1.3-0.rockspec | 72 ++
rockspec/apisix-1.4-0.rockspec | 74 ++
rockspec/apisix-master-0.rockspec | 8 +-
t/APISIX.pm | 17 +-
t/admin/balancer.t | 161 ++--
t/admin/global-rules.t | 61 +-
t/admin/plugins.t | 4 +-
t/admin/routes-array-nodes.t | 125 +++
t/admin/routes.t | 184 +++-
t/admin/schema.t | 20 +-
t/admin/services-array-nodes.t | 115 +++
t/admin/services-string-id.t | 884 ++++++++++++++++++
t/admin/services.t | 100 +-
t/admin/ssl.t | 174 +++-
t/admin/stream-routes.t | 86 ++
t/admin/upstream-array-nodes.t | 409 ++++++++
t/admin/upstream.t | 161 +++-
t/apisix.luacov | 1 +
t/core/etcd-auth-fail.t | 56 ++
t/core/etcd-auth.t | 59 ++
t/core/lrucache.t | 2 +-
t/debug/debug-mode.t | 8 +-
t/discovery/eureka.t | 131 +++
t/lib/server.lua | 24 +-
t/lib/test_admin.lua | 44 +-
t/node/filter_func.t | 81 ++
t/node/not-exist-upstream.t | 2 +-
t/node/upstream-array-nodes.t | 215 +++++
t/node/vars.t | 8 +-
t/plugin/authz-keycloak.t | 353 +++++++
t/plugin/batch-requests.t | 106 ++-
t/plugin/consumer-restriction.t | 542 +++++++++++
t/plugin/cors.t | 65 ++
t/plugin/echo.t | 469 ++++++++++
t/plugin/grpc-transcode.t | 50 +-
t/plugin/http-logger.t | 597 ++++++++++++
t/plugin/limit-conn.t | 61 ++
t/plugin/limit-count.t | 52 ++
t/plugin/limit-req.t | 53 ++
t/plugin/prometheus.t | 139 +++
t/plugin/redirect.t | 349 ++++++-
t/plugin/serverless.t | 59 ++
t/plugin/skywalking.t | 328 +++++++
t/plugin/syslog.t | 267 ++++++
t/plugin/uri-blocker.t | 332 +++++++
t/router/radixtree-sni.t | 509 +++++++++-
t/stream-plugin/mqtt-proxy.t | 3 -
237 files changed, 14521 insertions(+), 1723 deletions(-)
create mode 100644 .asf.yaml
create mode 100644 CODE_OF_CONDUCT.md
delete mode 100644 CODE_STYLE.md
create mode 100644 apisix/balancer/chash.lua
create mode 100644 apisix/balancer/roundrobin.lua
create mode 100644 apisix/discovery/eureka.lua
create mode 100644 apisix/discovery/init.lua
create mode 100644 apisix/plugins/authz-keycloak.lua
create mode 100644 apisix/plugins/consumer-restriction.lua
create mode 100644 apisix/plugins/echo.lua
create mode 100644 apisix/plugins/http-logger.lua
create mode 100644 apisix/plugins/skywalking.lua
create mode 100644 apisix/plugins/skywalking/client.lua
create mode 100644 apisix/plugins/skywalking/tracer.lua
create mode 100644 apisix/plugins/syslog.lua
create mode 100644 apisix/plugins/uri-blocker.lua
create mode 100644 apisix/upstream.lua
create mode 100644 conf/cert/apisix_admin_ssl.crt
create mode 100644 conf/cert/apisix_admin_ssl.key
create mode 100644 conf/cert/openssl-test2.conf
create mode 100644 conf/cert/test2.crt
create mode 100644 conf/cert/test2.key
delete mode 100644 doc/README_CN.md
create mode 100644 doc/_navbar.md
create mode 100644 doc/_sidebar.md
create mode 100644 doc/discovery.md
create mode 100644 doc/images/apache.png
create mode 100644 doc/images/discovery-cn.png
create mode 100644 doc/images/discovery.png
create mode 100644 doc/images/plugin/authz-keycloak.png
create mode 100644 doc/images/plugin/skywalking-1.png
create mode 100644 doc/images/plugin/skywalking-2.png
create mode 100644 doc/images/plugin/skywalking-3.png
create mode 100644 doc/images/plugin/skywalking-4.png
create mode 100644 doc/images/plugin/skywalking-5.png
create mode 100644 doc/index.html
create mode 100644 doc/plugins/authz-keycloak.md
create mode 100644 doc/plugins/consumer-restriction.md
create mode 100644 doc/plugins/echo.md
rename doc/plugins/{grpc-transcoding.md => grpc-transcode.md} (98%)
create mode 100644 doc/plugins/http-logger.md
delete mode 100644 doc/plugins/kafka-logger-cn.md
create mode 100644 doc/plugins/skywalking.md
create mode 100644 doc/plugins/syslog.md
create mode 100644 doc/plugins/uri-blocker.md
create mode 100644 doc/zh-cn/README.md
create mode 100644 doc/zh-cn/_sidebar.md
rename doc/{admin-api-cn.md => zh-cn/admin-api.md} (86%)
rename doc/{architecture-design-cn.md => zh-cn/architecture-design.md} (89%)
create mode 100644 doc/zh-cn/batch-processor.md
rename doc/{benchmark-cn.md => zh-cn/benchmark.md} (96%)
create mode 100644 doc/zh-cn/discovery.md
rename doc/{getting-started-cn.md => zh-cn/getting-started.md} (97%)
rename doc/{grpc-proxy-cn.md => zh-cn/grpc-proxy.md} (96%)
create mode 100644 doc/zh-cn/health-check.md
rename doc/{how-to-build-cn.md => zh-cn/how-to-build.md} (93%)
rename doc/{https-cn.md => zh-cn/https.md} (99%)
create mode 100644 doc/zh-cn/install-dependencies.md
rename doc/{plugin-develop-cn.md => zh-cn/plugin-develop.md} (95%)
rename doc/{plugins-cn.md => zh-cn/plugins.md} (97%)
create mode 100644 doc/zh-cn/plugins/authz-keycloak-cn.md
rename doc/{plugins/basic-auth-cn.md => zh-cn/plugins/basic-auth.md} (96%)
rename doc/{plugins/batch-requests-cn.md => zh-cn/plugins/batch-requests.md} (94%)
create mode 100644 doc/zh-cn/plugins/consumer-restriction.md
rename doc/{plugins/cors-cn.md => zh-cn/plugins/cors.md} (99%)
create mode 100644 doc/zh-cn/plugins/echo.md
rename doc/{plugins/fault-injection-cn.md => zh-cn/plugins/fault-injection.md} (98%)
rename doc/{plugins/grpc-transcoding-cn.md => zh-cn/plugins/grpc-transcode.md} (98%)
create mode 100644 doc/zh-cn/plugins/http-logger.md
rename doc/{plugins/ip-restriction-cn.md => zh-cn/plugins/ip-restriction.md} (94%)
rename doc/{plugins/jwt-auth-cn.md => zh-cn/plugins/jwt-auth.md} (97%)
create mode 100644 doc/zh-cn/plugins/kafka-logger.md
rename doc/{plugins/key-auth-cn.md => zh-cn/plugins/key-auth.md} (96%)
rename doc/{plugins/limit-conn-cn.md => zh-cn/plugins/limit-conn.md} (84%)
rename doc/{plugins/limit-count-cn.md => zh-cn/plugins/limit-count.md} (94%)
rename doc/{plugins/limit-req-cn.md => zh-cn/plugins/limit-req.md} (75%)
rename doc/{plugins/mqtt-proxy-cn.md => zh-cn/plugins/mqtt-proxy.md} (98%)
create mode 100644 doc/zh-cn/plugins/oauth.md
rename doc/{plugins/prometheus-cn.md => zh-cn/plugins/prometheus.md} (93%)
rename doc/{plugins/proxy-cache-cn.md => zh-cn/plugins/proxy-cache.md} (94%)
rename doc/{plugins/proxy-mirror-cn.md => zh-cn/plugins/proxy-mirror.md} (89%)
rename doc/{plugins/proxy-rewrite-cn.md => zh-cn/plugins/proxy-rewrite.md} (98%)
rename doc/{plugins/redirect-cn.md => zh-cn/plugins/redirect.md} (74%)
rename doc/{plugins/response-rewrite-cn.md => zh-cn/plugins/response-rewrite.md} (91%)
rename doc/{plugins/serverless-cn.md => zh-cn/plugins/serverless.md} (98%)
create mode 100644 doc/zh-cn/plugins/skywalking.md
create mode 100644 doc/zh-cn/plugins/syslog.md
rename doc/{plugins/tcp-logger-cn.md => zh-cn/plugins/tcp-logger.md} (84%)
rename doc/{plugins/udp-logger-cn.md => zh-cn/plugins/udp-logger.md} (84%)
rename doc/{plugins/wolf-rbac-cn.md => zh-cn/plugins/wolf-rbac.md} (98%)
rename doc/{plugins/zipkin-cn.md => zh-cn/plugins/zipkin.md} (93%)
rename doc/{profile-cn.md => zh-cn/profile.md} (100%)
rename doc/{stand-alone-cn.md => zh-cn/stand-alone.md} (99%)
rename doc/{stream-proxy-cn.md => zh-cn/stream-proxy.md} (96%)
create mode 100644 rockspec/apisix-1.3-0.rockspec
create mode 100644 rockspec/apisix-1.4-0.rockspec
create mode 100644 t/admin/routes-array-nodes.t
create mode 100644 t/admin/services-array-nodes.t
create mode 100644 t/admin/services-string-id.t
create mode 100644 t/admin/upstream-array-nodes.t
create mode 100644 t/core/etcd-auth-fail.t
create mode 100644 t/core/etcd-auth.t
create mode 100644 t/discovery/eureka.t
create mode 100644 t/node/filter_func.t
create mode 100644 t/node/upstream-array-nodes.t
create mode 100644 t/plugin/authz-keycloak.t
create mode 100644 t/plugin/consumer-restriction.t
create mode 100644 t/plugin/echo.t
create mode 100644 t/plugin/http-logger.t
create mode 100644 t/plugin/skywalking.t
create mode 100644 t/plugin/syslog.t
create mode 100644 t/plugin/uri-blocker.t
diff --git a/.asf.yaml b/.asf.yaml
new file mode 100644
index 0000000000000..ad1e99e2a4d56
--- /dev/null
+++ b/.asf.yaml
@@ -0,0 +1,48 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+github:
+ description: The Cloud-Native API Gateway
+ homepage: https://apisix.apache.org/
+ labels:
+ - api-gateway
+ - cloud-native
+ - nginx
+ - lua
+ - luajit
+ - apigateway
+ - microservices
+ - api
+ - loadbalancing
+ - reverse-proxy
+ - api-management
+ - apisix
+ - serverless
+ - iot
+ - devops
+ - kubernetes
+ - docker
+
+ enabled_merge_buttons:
+ squash: true
+ merge: false
+ rebase: false
+
+ notifications:
+ commits: notifications@apisix.apache.org
+ issues: notifications@apisix.apache.org
+ pullrequests: notifications@apisix.apache.org
diff --git a/.travis.yml b/.travis.yml
index d33d27cc2bce7..ddc6b898c1b0a 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,4 +1,4 @@
-dist: xenial
+dist: bionic
sudo: required
matrix:
diff --git a/.travis/apisix_cli_test.sh b/.travis/apisix_cli_test.sh
index d67c7f837b7d2..ca31995c73f25 100755
--- a/.travis/apisix_cli_test.sh
+++ b/.travis/apisix_cli_test.sh
@@ -23,6 +23,8 @@
set -ex
+git checkout conf/config.yaml
+
# check whether the 'reuseport' is in nginx.conf .
make init
@@ -72,3 +74,78 @@ done
sed -i '/dns_resolver:/,+4s/^#//' conf/config.yaml
echo "passed: system nameserver imported"
+
+# enable enable_dev_mode
+sed -i 's/enable_dev_mode: false/enable_dev_mode: true/g' conf/config.yaml
+
+make init
+
+count=`grep -c "worker_processes 1;" conf/nginx.conf`
+if [ $count -ne 1 ]; then
+ echo "failed: worker_processes is not 1 when enable enable_dev_mode"
+ exit 1
+fi
+
+count=`grep -c "listen 9080.*reuseport" conf/nginx.conf || true`
+if [ $count -ne 0 ]; then
+ echo "failed: reuseport should be disabled when enable enable_dev_mode"
+ exit 1
+fi
+
+git checkout conf/config.yaml
+
+# check whether the 'worker_cpu_affinity' is in nginx.conf .
+
+make init
+
+grep -E "worker_cpu_affinity" conf/nginx.conf > /dev/null
+if [ ! $? -eq 0 ]; then
+ echo "failed: nginx.conf file is missing worker_cpu_affinity configuration"
+ exit 1
+fi
+
+echo "passed: nginx.conf file contains worker_cpu_affinity configuration"
+
+# check admin https enabled
+
+sed -i 's/\# port_admin: 9180/port_admin: 9180/' conf/config.yaml
+sed -i 's/\# https_admin: true/https_admin: true/' conf/config.yaml
+
+make init
+
+grep "listen 9180 ssl" conf/nginx.conf > /dev/null
+if [ ! $? -eq 0 ]; then
+ echo "failed: failed to enabled https for admin"
+ exit 1
+fi
+
+make run
+
+code=$(curl -k -i -m 20 -o /dev/null -s -w %{http_code} https://127.0.0.1:9180/apisix/admin/routes -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1')
+if [ ! $code -eq 200 ]; then
+ echo "failed: failed to enabled https for admin"
+ exit 1
+fi
+
+echo "passed: admin https enabled"
+
+# rollback to the default
+
+make stop
+
+sed -i 's/port_admin: 9180/\# port_admin: 9180/' conf/config.yaml
+sed -i 's/https_admin: true/\# https_admin: true/' conf/config.yaml
+
+make init
+
+set +ex
+
+grep "listen 9180 ssl" conf/nginx.conf > /dev/null
+if [ ! $? -eq 1 ]; then
+ echo "failed: failed to rollback to the default admin config"
+ exit 1
+fi
+
+set -ex
+
+echo "passed: rollback to the default admin config"
diff --git a/.travis/linux_apisix_current_luarocks_runner.sh b/.travis/linux_apisix_current_luarocks_runner.sh
index b67e115fa7f53..0264fc5ba826d 100755
--- a/.travis/linux_apisix_current_luarocks_runner.sh
+++ b/.travis/linux_apisix_current_luarocks_runner.sh
@@ -47,6 +47,11 @@ script() {
export PATH=$OPENRESTY_PREFIX/nginx/sbin:$OPENRESTY_PREFIX/luajit/bin:$OPENRESTY_PREFIX/bin:$PATH
openresty -V
sudo service etcd start
+ sudo service etcd stop
+ mkdir -p ~/etcd-data
+ /usr/bin/etcd --listen-client-urls 'http://0.0.0.0:2379' --advertise-client-urls='http://0.0.0.0:2379' --data-dir ~/etcd-data > /dev/null 2>&1 &
+ etcd --version
+ sleep 5
sudo rm -rf /usr/local/apisix
diff --git a/.travis/linux_apisix_master_luarocks_runner.sh b/.travis/linux_apisix_master_luarocks_runner.sh
index 2c76087fa20b8..7705c97559eae 100755
--- a/.travis/linux_apisix_master_luarocks_runner.sh
+++ b/.travis/linux_apisix_master_luarocks_runner.sh
@@ -20,6 +20,7 @@ set -ex
export_or_prefix() {
export OPENRESTY_PREFIX="/usr/local/openresty-debug"
+ export APISIX_MAIN="https://raw.githubusercontent.com/apache/incubator-apisix/master/rockspec/apisix-master-0.rockspec"
}
do_install() {
@@ -46,7 +47,11 @@ script() {
export_or_prefix
export PATH=$OPENRESTY_PREFIX/nginx/sbin:$OPENRESTY_PREFIX/luajit/bin:$OPENRESTY_PREFIX/bin:$PATH
openresty -V
- sudo service etcd start
+ sudo service etcd stop
+ mkdir -p ~/etcd-data
+ /usr/bin/etcd --listen-client-urls 'http://0.0.0.0:2379' --advertise-client-urls='http://0.0.0.0:2379' --data-dir ~/etcd-data > /dev/null 2>&1 &
+ etcd --version
+ sleep 5
sudo rm -rf /usr/local/apisix
@@ -62,7 +67,7 @@ script() {
sudo PATH=$PATH ./utils/install-apisix.sh remove > build.log 2>&1 || (cat build.log && exit 1)
# install APISIX by luarocks
- sudo luarocks install rockspec/apisix-master-0.rockspec > build.log 2>&1 || (cat build.log && exit 1)
+ sudo luarocks install $APISIX_MAIN > build.log 2>&1 || (cat build.log && exit 1)
# show install files
luarocks show apisix
diff --git a/.travis/linux_openresty_runner.sh b/.travis/linux_openresty_runner.sh
index 384d10ec4a824..86505cfce3c62 100755
--- a/.travis/linux_openresty_runner.sh
+++ b/.travis/linux_openresty_runner.sh
@@ -37,12 +37,17 @@ before_install() {
sudo cpanm --notest Test::Nginx >build.log 2>&1 || (cat build.log && exit 1)
docker pull redis:3.0-alpine
docker run --rm -itd -p 6379:6379 --name apisix_redis redis:3.0-alpine
+ docker run --rm -itd -e HTTP_PORT=8888 -e HTTPS_PORT=9999 -p 8888:8888 -p 9999:9999 mendhak/http-https-echo
+ # Runs Keycloak version 10.0.2 with inbuilt policies for unit tests
+ docker run --rm -itd -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=123456 -p 8090:8080 sshniro/keycloak-apisix
# spin up kafka cluster for tests (1 zookeper and 1 kafka instance)
docker pull bitnami/zookeeper:3.6.0
docker pull bitnami/kafka:latest
docker network create kafka-net --driver bridge
docker run --name zookeeper-server -d -p 2181:2181 --network kafka-net -e ALLOW_ANONYMOUS_LOGIN=yes bitnami/zookeeper:3.6.0
docker run --name kafka-server1 -d --network kafka-net -e ALLOW_PLAINTEXT_LISTENER=yes -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181 -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 -p 9092:9092 -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true bitnami/kafka:latest
+ docker pull bitinit/eureka
+ docker run --name eureka -d -p 8761:8761 --env ENVIRONMENT=apisix --env spring.application.name=apisix-eureka --env server.port=8761 --env eureka.instance.ip-address=127.0.0.1 --env eureka.client.registerWithEureka=true --env eureka.client.fetchRegistry=false --env eureka.client.serviceUrl.defaultZone=http://127.0.0.1:8761/eureka/ bitinit/eureka
sleep 5
docker exec -it kafka-server1 /opt/bitnami/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper-server:2181 --replication-factor 1 --partitions 1 --topic test2
}
@@ -123,7 +128,11 @@ script() {
export_or_prefix
export PATH=$OPENRESTY_PREFIX/nginx/sbin:$OPENRESTY_PREFIX/luajit/bin:$OPENRESTY_PREFIX/bin:$PATH
openresty -V
- sudo service etcd start
+ sudo service etcd stop
+ mkdir -p ~/etcd-data
+ /usr/bin/etcd --listen-client-urls 'http://0.0.0.0:2379' --advertise-client-urls='http://0.0.0.0:2379' --data-dir ~/etcd-data > /dev/null 2>&1 &
+ etcd --version
+ sleep 5
./build-cache/grpc_server_example &
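As an aside, the `sleep 5` above just hopes etcd is up within five seconds. A more robust pattern (a sketch, not part of this patch; the etcd `/health` endpoint is the only assumption beyond the script itself) is to poll until the service answers:

```shell
# Sketch: poll a readiness command instead of sleeping a fixed time.
# wait_for CMD TIMEOUT retries CMD once per second, up to TIMEOUT seconds.
wait_for() {
    local cmd="$1" timeout="$2" i=0
    until eval "$cmd" >/dev/null 2>&1; do
        i=$((i + 1))
        if [ "$i" -ge "$timeout" ]; then
            echo "timed out waiting for: $cmd" >&2
            return 1
        fi
        sleep 1
    done
}

# e.g. replace the bare `sleep 5` with:
# wait_for "curl -sf http://127.0.0.1:2379/health" 30
```

The same helper would also cover the Kafka and Keycloak containers started earlier in `before_install`.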
@@ -142,7 +151,7 @@ script() {
sleep 1
make lint && make license-check || exit 1
- APISIX_ENABLE_LUACOV=1 prove -Itest-nginx/lib -r t
+ APISIX_ENABLE_LUACOV=1 PERL5LIB=.:$PERL5LIB prove -Itest-nginx/lib -r t
}
after_success() {
diff --git a/.travis/linux_tengine_runner.sh b/.travis/linux_tengine_runner.sh
index 45a9ec448e298..fb9b6fd657242 100755
--- a/.travis/linux_tengine_runner.sh
+++ b/.travis/linux_tengine_runner.sh
@@ -38,12 +38,17 @@ before_install() {
sudo cpanm --notest Test::Nginx >build.log 2>&1 || (cat build.log && exit 1)
docker pull redis:3.0-alpine
docker run --rm -itd -p 6379:6379 --name apisix_redis redis:3.0-alpine
+ docker run --rm -itd -e HTTP_PORT=8888 -e HTTPS_PORT=9999 -p 8888:8888 -p 9999:9999 mendhak/http-https-echo
+ # Runs Keycloak version 10.0.2 with inbuilt policies for unit tests
+ docker run --rm -itd -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=123456 -p 8090:8080 sshniro/keycloak-apisix
# spin up kafka cluster for tests (1 zookeeper and 1 kafka instance)
docker pull bitnami/zookeeper:3.6.0
docker pull bitnami/kafka:latest
docker network create kafka-net --driver bridge
docker run --name zookeeper-server -d -p 2181:2181 --network kafka-net -e ALLOW_ANONYMOUS_LOGIN=yes bitnami/zookeeper:3.6.0
docker run --name kafka-server1 -d --network kafka-net -e ALLOW_PLAINTEXT_LISTENER=yes -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181 -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 -p 9092:9092 -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true bitnami/kafka:latest
+ docker pull bitinit/eureka
+ docker run --name eureka -d -p 8761:8761 --env ENVIRONMENT=apisix --env spring.application.name=apisix-eureka --env server.port=8761 --env eureka.instance.ip-address=127.0.0.1 --env eureka.client.registerWithEureka=true --env eureka.client.fetchRegistry=false --env eureka.client.serviceUrl.defaultZone=http://127.0.0.1:8761/eureka/ bitinit/eureka
sleep 5
docker exec -it kafka-server1 /opt/bitnami/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper-server:2181 --replication-factor 1 --partitions 1 --topic test2
}
@@ -266,7 +271,11 @@ script() {
export_or_prefix
export PATH=$OPENRESTY_PREFIX/nginx/sbin:$OPENRESTY_PREFIX/luajit/bin:$OPENRESTY_PREFIX/bin:$PATH
openresty -V
- sudo service etcd start
+ sudo service etcd stop
+ mkdir -p ~/etcd-data
+ /usr/bin/etcd --listen-client-urls 'http://0.0.0.0:2379' --advertise-client-urls='http://0.0.0.0:2379' --data-dir ~/etcd-data > /dev/null 2>&1 &
+ etcd --version
+ sleep 5
./build-cache/grpc_server_example &
@@ -279,7 +288,7 @@ script() {
./bin/apisix stop
sleep 1
make lint && make license-check || exit 1
- APISIX_ENABLE_LUACOV=1 prove -Itest-nginx/lib -r t
+ APISIX_ENABLE_LUACOV=1 PERL5LIB=.:$PERL5LIB prove -Itest-nginx/lib -r t
}
after_success() {
diff --git a/.travis/osx_openresty_runner.sh b/.travis/osx_openresty_runner.sh
index 1cfce27285859..0f60eb987b406 100755
--- a/.travis/osx_openresty_runner.sh
+++ b/.travis/osx_openresty_runner.sh
@@ -43,7 +43,7 @@ do_install() {
git clone https://github.com/iresty/test-nginx.git test-nginx
wget -P utils https://raw.githubusercontent.com/openresty/openresty-devel-utils/master/lj-releng
- chmod a+x utils/lj-releng
+ chmod a+x utils/lj-releng
wget https://github.com/iresty/grpc_server_example/releases/download/20200314/grpc_server_example-darwin-amd64.tar.gz
tar -xvf grpc_server_example-darwin-amd64.tar.gz
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5e5494df6f292..8770248b0d354 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -19,6 +19,8 @@
# Table of Contents
+- [1.4.0](#140)
+- [1.3.0](#130)
- [1.2.0](#120)
- [1.1.0](#110)
- [1.0.0](#100)
@@ -27,6 +29,37 @@
- [0.7.0](#070)
- [0.6.0](#060)
+## 1.4.0
+
+### Core
+- Admin API: Support unique names for routes [1655](https://github.com/apache/incubator-apisix/pull/1655)
+- Optimization of log buffer size and flush time [1570](https://github.com/apache/incubator-apisix/pull/1570)
+
+### New plugins
+- :sunrise: **Apache Skywalking plugin** [1241](https://github.com/apache/incubator-apisix/pull/1241)
+- :sunrise: **Keycloak Identity Server Plugin** [1701](https://github.com/apache/incubator-apisix/pull/1701)
+- :sunrise: **Echo Plugin** [1632](https://github.com/apache/incubator-apisix/pull/1632)
+- :sunrise: **Consume Restriction Plugin** [1437](https://github.com/apache/incubator-apisix/pull/1437)
+
+### Improvements
+- Batch Request: Copy all headers to every request [1697](https://github.com/apache/incubator-apisix/pull/1697)
+- SSL private key encryption [1678](https://github.com/apache/incubator-apisix/pull/1678)
+- Improvement of docs for multiple plugins
+
+
+## 1.3.0
+
+The 1.3 version is mainly a security release.
+
+### Security
+- Reject invalid header [#1462](https://github.com/apache/incubator-apisix/pull/1462) and safely encode URIs [#1461](https://github.com/apache/incubator-apisix/pull/1461)
+- Only allow 127.0.0.1 to access the Admin API and dashboard by default. [#1458](https://github.com/apache/incubator-apisix/pull/1458)
+
+### Plugin
+- :sunrise: **Add batch request plugin**. [#1388](https://github.com/apache/incubator-apisix/pull/1388)
+- Implement plugin `sys logger`. [#1414](https://github.com/apache/incubator-apisix/pull/1414)
+
+
## 1.2.0
The 1.2 version brings many new features, including core and plugins.
diff --git a/CHANGELOG_CN.md b/CHANGELOG_CN.md
index 8e19e84721ea4..d64e260295b84 100644
--- a/CHANGELOG_CN.md
+++ b/CHANGELOG_CN.md
@@ -19,6 +19,8 @@
# Table of Contents
+- [1.4.0](#140)
+- [1.3.0](#130)
- [1.2.0](#120)
- [1.1.0](#110)
- [1.0.0](#100)
@@ -27,6 +29,36 @@
- [0.7.0](#070)
- [0.6.0](#060)
+## 1.4.0
+
+### Core
+- Admin API: routes support a unique name field [1655](https://github.com/apache/incubator-apisix/pull/1655)
+- Optimize log buffer size and flush time [1570](https://github.com/apache/incubator-apisix/pull/1570)
+
+### New plugins
+- :sunrise: **Apache Skywalking plugin** [1241](https://github.com/apache/incubator-apisix/pull/1241)
+- :sunrise: **Keycloak Identity Server Plugin** [1701](https://github.com/apache/incubator-apisix/pull/1701)
+- :sunrise: **Echo Plugin** [1632](https://github.com/apache/incubator-apisix/pull/1632)
+- :sunrise: **Consume Restriction Plugin** [1437](https://github.com/apache/incubator-apisix/pull/1437)
+
+### Improvements
+- Batch Request: Copy all headers to every request [1697](https://github.com/apache/incubator-apisix/pull/1697)
+- SSL private key encryption [1678](https://github.com/apache/incubator-apisix/pull/1678)
+- Improvement of docs for multiple plugins
+
+## 1.3.0
+
+The 1.3 version is mainly a security release.
+
+### Security
+- Reject invalid header [#1462](https://github.com/apache/incubator-apisix/pull/1462) and safely encode URIs [#1461](https://github.com/apache/incubator-apisix/pull/1461)
+- Only allow the loopback address 127.0.0.1 to access the Admin API and dashboard by default. [#1458](https://github.com/apache/incubator-apisix/pull/1458)
+
+### Plugin
+- :sunrise: **Add batch request plugin**. [#1388](https://github.com/apache/incubator-apisix/pull/1388)
+- Implement plugin `sys logger`. [#1414](https://github.com/apache/incubator-apisix/pull/1414)
+
+
## 1.2.0
1.2 版本在内核以及插件上带来了非常多的更新。
diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000000000..732f5ae2eb464
--- /dev/null
+++ b/CODE_OF_CONDUCT.md
@@ -0,0 +1,127 @@
+
+
+*The following is copied for your convenience from . If there's a discrepancy between the two, let us know or submit a PR to fix it.*
+
+# Code of Conduct #
+
+## Introduction ##
+
+This code of conduct applies to all spaces managed by the Apache
+Software Foundation, including IRC, all public and private mailing
+lists, issue trackers, wikis, blogs, Twitter, and any other
+communication channel used by our communities. A code of conduct which
+is specific to in-person events (i.e., conferences) is codified in the
+published ASF anti-harassment policy.
+
+We expect this code of conduct to be honored by everyone who
+participates in the Apache community formally or informally, or claims
+any affiliation with the Foundation, in any Foundation-related
+activities and especially when representing the ASF, in any role.
+
+This code __is not exhaustive or complete__. It serves to distill our
+common understanding of a collaborative, shared environment and goals.
+We expect it to be followed in spirit as much as in the letter, so that
+it can enrich all of us and the technical communities in which we participate.
+
+## Specific Guidelines ##
+
+We strive to:
+
+
+1. __Be open.__ We invite anyone to participate in our community. We preferably use public methods of communication for project-related messages, unless discussing something sensitive. This applies to messages for help or project-related support, too; not only is a public support request much more likely to result in an answer to a question, it also makes sure that any inadvertent mistakes made by people answering will be more easily detected and corrected.
+
+2. __Be empathetic, welcoming, friendly, and patient.__ We work together to resolve conflict, assume good intentions, and do our best to act in an empathetic fashion. We may all experience some frustration from time to time, but we do not allow frustration to turn into a personal attack. A community where people feel uncomfortable or threatened is not a productive one. We should be respectful when dealing with other community members as well as with people outside our community.
+
+3. __Be collaborative.__ Our work will be used by other people, and in turn we will depend on the work of others. When we make something for the benefit of the project, we are willing to explain to others how it works, so that they can build on the work to make it even better. Any decision we make will affect users and colleagues, and we take those consequences seriously when making decisions.
+
+4. __Be inquisitive.__ Nobody knows everything! Asking questions early avoids many problems later, so questions are encouraged, though they may be directed to the appropriate forum. Those who are asked should be responsive and helpful, within the context of our shared goal of improving Apache project code.
+
+5. __Be careful in the words that we choose.__ Whether we are participating as professionals or volunteers, we value professionalism in all interactions, and take responsibility for our own speech. Be kind to others. Do not insult or put down other participants. Harassment and other exclusionary behaviour are not acceptable. This includes, but is not limited to:
+
+ * Violent threats or language directed against another person.
+ * Sexist, racist, or otherwise discriminatory jokes and language.
+ * Posting sexually explicit or violent material.
+ * Posting (or threatening to post) other people's personally identifying information ("doxing").
+ * Sharing private content, such as emails sent privately or non-publicly, or unlogged forums such as IRC channel history.
+ * Personal insults, especially those using racist or sexist terms.
+ * Unwelcome sexual attention.
+ * Excessive or unnecessary profanity.
+ * Repeated harassment of others. In general, if someone asks you to stop, then stop.
+ * Advocating for, or encouraging, any of the above behaviour.
+
+6. __Be concise.__ Keep in mind that what you write once will be read by hundreds of persons. Writing a short email means people can understand the conversation as efficiently as possible. Short emails should always strive to be empathetic, welcoming, friendly and patient. When a long explanation is necessary, consider adding a summary.
+
+ Try to bring new ideas to a conversation so that each mail adds something unique to the thread, keeping in mind that the rest of the thread still contains the other messages with arguments that have already been made.
+
+ Try to stay on topic, especially in discussions that are already fairly large.
+
+7. __Step down considerately.__ Members of every project come and go. When somebody leaves or disengages from the project they should tell people they are leaving and take the proper steps to ensure that others can pick up where they left off. In doing so, they should remain respectful of those who continue to participate in the project and should not misrepresent the project's goals or achievements. Likewise, community members should respect any individual's choice to leave the project.
+
+
+## Diversity Statement ##
+
+Apache welcomes and encourages participation by everyone. We are committed to being a community that everyone feels good about joining. Although we may not be able to satisfy everyone, we will always work to treat everyone well.
+
+No matter how you identify yourself or how others perceive you: we welcome you. Though no list can hope to be comprehensive, we explicitly honour diversity in: age, culture, ethnicity, genotype, gender identity or expression, language, national origin, neurotype, phenotype, political beliefs, profession, race, religion, sexual orientation, socioeconomic status, subculture and technical ability.
+
+Though we welcome people fluent in all languages, Apache development is conducted in English.
+
+Standards for behaviour in the Apache community are detailed in the Code of Conduct above. We expect participants in our community to meet these standards in all their interactions and to help others to do so as well.
+
+## Reporting Guidelines ##
+
+While this code of conduct should be adhered to by participants, we recognize that sometimes people may have a bad day, or be unaware of some of the guidelines in this code of conduct. When that happens, you may reply to them and point out this code of conduct. Such messages may be in public or in private, whatever is most appropriate. However, regardless of whether the message is public or not, it should still adhere to the relevant parts of this code of conduct; in particular, it should not be abusive or disrespectful.
+
+If you believe someone is violating this code of conduct, you may reply to
+them and point out this code of conduct. Such messages may be in public or in
+private, whatever is most appropriate. Assume good faith; it is more likely
+that participants are unaware of their bad behaviour than that they
+intentionally try to degrade the quality of the discussion. Should there be
+difficulties in dealing with the situation, you may report your compliance
+issues in confidence to either:
+
+ * President of the Apache Software Foundation: Sam Ruby (rubys at intertwingly dot net)
+
+or one of our volunteers:
+
+ * [Mark Thomas](http://home.apache.org/~markt/coc.html)
+ * [Joan Touzet](http://home.apache.org/~wohali/)
+ * [Sharan Foga](http://home.apache.org/~sharan/coc.html)
+
+If the violation is in documentation or code, for example inappropriate pronoun usage or word choice within official documentation, we ask that people report these privately to the project in question at private@project.apache.org, and, if they have sufficient ability within the project, to resolve or remove the concerning material, being mindful of the perspective of the person originally reporting the issue.
+
+
+## End Notes ##
+
+This Code defines __empathy__ as "a vicarious participation in the emotions, ideas, or opinions of others; the ability to imagine oneself in the condition or predicament of another." __Empathetic__ is the adjectival form of empathy.
+
+
+This statement thanks the following, on which it draws for content and inspiration:
+
+
+ * [CouchDB Project Code of conduct](http://couchdb.apache.org/conduct.html)
+ * [Fedora Project Code of Conduct](http://fedoraproject.org/code-of-conduct)
+ * [Speak Up! Code of Conduct](http://speakup.io/coc.html)
+ * [Django Code of Conduct](https://www.djangoproject.com/conduct/)
+ * [Debian Code of Conduct](http://www.debian.org/vote/2014/vote_002)
+ * [Twitter Open Source Code of Conduct](https://github.com/twitter/code-of-conduct/blob/master/code-of-conduct.md)
+ * [Mozilla Code of Conduct/Draft](https://wiki.mozilla.org/Code_of_Conduct/Draft#Conflicts_of_Interest)
+ * [Python Diversity Appendix](https://www.python.org/community/diversity/)
+ * [Python Mentors Home Page](http://pythonmentors.com/)
diff --git a/CODE_STYLE.md b/CODE_STYLE.md
deleted file mode 100644
index 8b7a4ca2ef659..0000000000000
--- a/CODE_STYLE.md
+++ /dev/null
@@ -1,393 +0,0 @@
-
-
-# OpenResty Lua Coding Style Guide
-
-## indentation
-Use 4 spaces as an indent in OpenResty, although Lua does not have such a grammar requirement.
-
-```
---No
-if a then
-ngx.say("hello")
-end
-```
-
-```
---yes
-if a then
- ngx.say("hello")
-end
-```
-
-You can simplify the operation by changing the tab to 4 spaces in the editor you are using.
-
-## Space
-On both sides of the operator, you need to use a space to separate:
-
-```
---No
-local i=1
-local s = "apisix"
-```
-
-```
---Yes
-local i = 1
-local s = "apisix"
-```
-
-## Blank line
-Many developers will bring the development habits of other languages to OpenResty, such as adding a semicolon at the end of the line.
-
-```
---No
-if a then
- ngx.say("hello");
-end;
-```
-
-Adding a semicolon will make the Lua code look ugly and unnecessary. Also, don't want to save the number of lines in the code, the latter turns the multi-line code into one line in order to appear "simple". This will not know when the positioning error is in the end of the code:
-
-```
---No
-if a then ngx.say("hello") end
-```
-
-```
---yes
-if a then
- ngx.say("hello")
-end
-```
-
-The functions needs to be separated by two blank lines:
-```
---No
-local function foo()
-end
-local function bar()
-end
-```
-
-```
---Yes
-local function foo()
-end
-
-
-local function bar()
-end
-```
-
-If there are multiple if elseif branches, they need a blank line to separate them:
-```
---No
-if a == 1 then
- foo()
-elseif a== 2 then
- bar()
-elseif a == 3 then
- run()
-else
- error()
-end
-```
-
-```
---Yes
-if a == 1 then
- foo()
-
-elseif a== 2 then
- bar()
-
-elseif a == 3 then
- run()
-
-else
- error()
-end
-```
-
-## Maximum length per line
-Each line cannot exceed 80 characters. If it exceeds, you need to wrap and align:
-
-```
---No
-return limit_conn_new("plugin-limit-conn", conf.conn, conf.burst, conf.default_conn_delay)
-```
-
-```
---Yes
-return limit_conn_new("plugin-limit-conn", conf.conn, conf.burst,
- conf.default_conn_delay)
-```
-
-When the linefeed is aligned, the correspondence between the upper and lower lines should be reflected. For the example above, the parameters of the second line of functions are to the right of the left parenthesis of the first line.
-
-If it is a string stitching alignment, you need to put `..` in the next line:
-```
---No
-return limit_conn_new("plugin-limit-conn" .. "plugin-limit-conn" ..
- "plugin-limit-conn")
-```
-
-```
---Yes
-return limit_conn_new("plugin-limit-conn" .. "plugin-limit-conn"
- .. "plugin-limit-conn")
-```
-
-```
---Yes
-return "param1", "plugin-limit-conn"
- .. "plugin-limit-conn")
-```
-
-## Variable
-Local variables should always be used, not global variables:
-```
---No
-i = 1
-s = "apisix"
-```
-
-```
---Yes
-local i = 1
-local s = "apisix"
-```
-
-Variable naming uses the `snake_case` style:
-```
---No
-local IndexArr = 1
-local str_Name = "apisix"
-```
-
-```
---Yes
-local index_arr = 1
-local str_name = "apisix"
-```
-
-Use all capitalization for constants:
-```
---No
-local max_int = 65535
-local server_name = "apisix"
-```
-
-```
---Yes
-local MAX_INT = 65535
-local SERVER_NAME = "apisix"
-```
-
-## Table
-Use `table.new` to pre-allocate the table:
-```
---No
-local t = {}
-for i = 1, 100 do
- t[i] = i
-end
-```
-
-```
---Yes
-local new_tab = require "table.new"
-local t = new_tab(100, 0)
-for i = 1, 100 do
- t[i] = i
-end
-```
-
-Don't use `nil` in an array:
-```
---No
-local t = {1, 2, nil, 3}
-```
-
-If you must use null values, use `ngx.null` to indicate:
-```
---Yes
-local t = {1, 2, ngx.null, 3}
-```
-
-## String
-Do not splicing strings on the hot code path:
-```
---No
-local s = ""
-for i = 1, 100000 do
- s = s .. "a"
-end
-```
-
-```
---Yes
-local t = {}
-for i = 1, 100000 do
- t[i] = "a"
-end
-local s = table.concat(t, "")
-```
-
-## Function
-The naming of functions also follows `snake_case`:
-```
---No
-local function testNginx()
-end
-```
-
-```
---Yes
-local function test_nginx()
-end
-```
-
-The function should return as early as possible:
-```
---No
-local function check(age, name)
- local ret = true
- if age < 20 then
- ret = false
- end
-
- if name == "a" then
- ret = false
- end
- -- do something else
- return ret
-end
-```
-
-```
---Yes
-local function check(age, name)
- if age < 20 then
- return false
- end
-
- if name == "a" then
- return false
- end
- -- do something else
- return true
-end
-```
-
-## Module
-All require libraries must be localized:
-```
---No
-local function foo()
- local ok, err = ngx.timer.at(delay, handler)
-end
-```
-
-```
---Yes
-local timer_at = ngx.timer.at
-
-local function foo()
- local ok, err = timer_at(delay, handler)
-end
-```
-
-For style unification, `require` and `ngx` also need to be localized:
-```
---No
-local core = require("apisix.core")
-local timer_at = ngx.timer.at
-
-local function foo()
- local ok, err = timer_at(delay, handler)
-end
-```
-
-```
---Yes
-local ngx = ngx
-local require = require
-local core = require("apisix.core")
-local timer_at = ngx.timer.at
-
-local function foo()
- local ok, err = timer_at(delay, handler)
-end
-```
-
-## Error handling
-For functions that return with error information, the error information must be judged and processed:
-```
---No
-local sock = ngx.socket.tcp()
-local ok = sock:connect("www.google.com", 80)
-ngx.say("successfully connected to google!")
-```
-
-```
---Yes
-local sock = ngx.socket.tcp()
-local ok, err = sock:connect("www.google.com", 80)
-if not ok then
- ngx.say("failed to connect to google: ", err)
- return
-end
-ngx.say("successfully connected to google!")
-```
-
-The function you wrote yourself, the error message is to be returned as a second parameter in the form of a string:
-```
---No
-local function foo()
- local ok, err = func()
- if not ok then
- return false
- end
- return true
-end
-```
-
-```
---No
-local function foo()
- local ok, err = func()
- if not ok then
- return false, {msg = err}
- end
- return true
-end
-```
-
-```
---Yes
-local function foo()
- local ok, err = func()
- if not ok then
- return false, "failed to call func(): " .. err
- end
- return true
-end
-```
diff --git a/FAQ.md b/FAQ.md
index 4b56fce0355d0..dbf365e1f5c0f 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -71,14 +71,14 @@ Run the `luarocks config rocks_servers` command(this command is supported after
If using a proxy doesn't solve this problem, you can add the `--verbose` option during installation to see exactly where it is slow. Excluding the first case, only the second remains: the `git` protocol is blocked. In that case, run `git config --global url."https://".insteadOf git://` to use the `https` protocol instead of `git`.
-## How to support A/B testing via APISIX?
+## How to support gray release via APISIX?
-An example, if you want to group by the request param `arg_id`:
+For example, for `foo.com/product/index.html?id=204&page=2`, we can do a gray release based on the `id` in the uri's query string:
-1. Group A:arg_id <= 1000
-2. Group B:arg_id > 1000
+1. Group A: id <= 1000
+2. Group B: id > 1000
-here is the way:
+Here is how to do it:
```shell
curl -i http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
@@ -107,11 +107,95 @@ curl -i http://127.0.0.1:9080/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335
}'
```
+
Here is the operator list of current `lua-resty-radixtree`:
https://github.com/iresty/lua-resty-radixtree#operator-list
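The two groups above differ only by a numeric threshold on `id`; the decision the two routes implement can be sketched as follows (illustrative only, the helper name is made up):

```shell
# Sketch of the grouping rule implemented by the two routes above.
group_for_id() {
    if [ "$1" -le 1000 ]; then
        echo "A"    # matched by the first route
    else
        echo "B"    # matched by the second route
    fi
}
```

With the example uri `?id=204&page=2`, the request would fall into group A.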
+## How to redirect http to https via APISIX?
+
+For example, redirect `http://foo.com` to `https://foo.com`.
+
+There are several different ways to do this:
+1. Directly use the `http_to_https` option of the `redirect` plugin:
+```shell
+curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+ "uri": "/hello",
+ "host": "foo.com",
+ "plugins": {
+ "redirect": {
+ "http_to_https": true
+ }
+ }
+}'
+```
+
+2. Combine the advanced routing rule `vars` with the `redirect` plugin:
+
+```shell
+curl -i http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+ "uri": "/hello",
+ "host": "foo.com",
+ "vars": [
+ [
+ "scheme",
+ "==",
+ "http"
+ ]
+ ],
+ "plugins": {
+ "redirect": {
+ "uri": "https://$host$request_uri",
+ "ret_code": 301
+ }
+ }
+}'
+```
+
+3. Use the `serverless` plugin:
+
+```shell
+curl -i http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+ "uri": "/hello",
+ "plugins": {
+ "serverless-pre-function": {
+ "phase": "rewrite",
+ "functions": ["return function() if ngx.var.scheme == \"http\" and ngx.var.host == \"foo.com\" then ngx.header[\"Location\"] = \"https://foo.com\" .. ngx.var.request_uri; ngx.exit(ngx.HTTP_MOVED_PERMANENTLY); end; end"]
+ }
+ }
+}'
+```
+
+Then test it to see if it works:
+```shell
+curl -i -H 'Host: foo.com' http://127.0.0.1:9080/hello
+```
+
+The response should be:
+```
+HTTP/1.1 301 Moved Permanently
+Date: Mon, 18 May 2020 02:56:04 GMT
+Content-Type: text/html
+Content-Length: 166
+Connection: keep-alive
+Location: https://foo.com/hello
+Server: APISIX web server
+
+<html>
+<head><title>301 Moved Permanently</title></head>
+<body>
+<center><h1>301 Moved Permanently</h1></center>
+<hr><center>openresty</center>
+</body>
+</html>
+```
+
+
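For a scripted check instead of eyeballing the headers, something like the following could assert the `Location` header (a sketch; `check_redirect` and its arguments are made up for illustration):

```shell
# Sketch: assert that a request is redirected to the expected location.
check_redirect() {
    local url="$1" host="$2" expected="$3" loc
    # -D - dumps the response headers to stdout; the body goes to /dev/null.
    loc=$(curl -s -o /dev/null -D - -H "Host: $host" "$url" \
        | awk 'tolower($1) == "location:" {print $2}' | tr -d '\r')
    [ "$loc" = "$expected" ]
}

# e.g.: check_redirect http://127.0.0.1:9080/hello foo.com https://foo.com/hello
```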
## How to fix OpenResty Installation Failure on MacOS 10.15
When you install OpenResty on macOS 10.15, you may face this error
+
```shell
> brew install openresty
Updating Homebrew...
@@ -172,3 +256,17 @@ Steps:
2. Restart APISIX
Now you can trace the info level log in logs/error.log.
+
+## How to reload your own plugin
+
+Apache APISIX plugins support hot reloading. If the Admin API is enabled on your APISIX node, then for scenarios such as adding, deleting, or modifying plugins, you can hot reload them by calling an HTTP interface, without restarting the service.
+
+```shell
+curl http://127.0.0.1:9080/apisix/admin/plugins/reload -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT
+```
+
+If the Admin API is not enabled on your APISIX node, you can load the plugins by manually reloading APISIX.
+
+```shell
+apisix reload
+```
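The two commands above can be wrapped in a small helper that tries the Admin API first and falls back to a full reload (a sketch; the default address and API key are the examples used throughout this FAQ and will differ in real deployments):

```shell
# Sketch: try the Admin API hot-reload first, fall back to a full reload.
reload_plugins() {
    local admin="${1:-http://127.0.0.1:9080}"
    local key="${2:-edd1c9f034335f136f87ad84b625c8f1}"
    if curl -sf -m 2 -X PUT "$admin/apisix/admin/plugins/reload" \
            -H "X-API-KEY: $key" >/dev/null 2>&1; then
        echo "reloaded via Admin API"
    else
        # Admin API disabled or unreachable: reload the whole server.
        apisix reload
    fi
}
```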
diff --git a/FAQ_CN.md b/FAQ_CN.md
index 8d1e52824b514..334fc693c7397 100644
--- a/FAQ_CN.md
+++ b/FAQ_CN.md
@@ -45,7 +45,7 @@ APISIX 是当前性能最好的 API 网关,单核 QPS 达到 2.3 万,平均
当然可以,APISIX 提供了灵活的自定义插件,方便开发者和企业编写自己的逻辑。
-[如何开发插件](doc/plugin-develop-cn.md)
+[How to develop a plugin](doc/zh-cn/plugin-develop.md)
## 我们为什么选择 etcd 作为配置中心?
@@ -73,14 +73,15 @@ luarocks 服务。 运行 `luarocks config rocks_servers` 命令(这个命令
如果使用代理仍然解决不了这个问题,那可以在安装的过程中添加 `--verbose` 选项来查看具体是慢在什么地方。排除前面的
第一种情况,只可能是第二种,`git` 协议被封。这个时候可以执行 `git config --global url."https://".insteadOf git://` 命令使用 `https` 协议替代。
-## 如何通过APISIX支持A/B测试?
+## How to support gray release via APISIX?
-比如,根据入参`arg_id`分组:
+For example, for `foo.com/product/index.html?id=204&page=2`, we can do a gray release based on the `id` in the uri's query string:
-1. A组:arg_id <= 1000
-2. B组:arg_id > 1000
+1. Group A: id <= 1000
+2. Group B: id > 1000
可以这么做:
+
```shell
curl -i http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
@@ -109,9 +110,91 @@ curl -i http://127.0.0.1:9080/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335
}'
```
+
更多的 lua-resty-radixtree 匹配操作,可查看操作列表:
https://github.com/iresty/lua-resty-radixtree#operator-list
+## How to redirect http to https via APISIX?
+
+For example, redirect `http://foo.com` to `https://foo.com`.
+
+There are several different ways to do this:
+1. Directly use the `http_to_https` option of the `redirect` plugin:
+```shell
+curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+ "uri": "/hello",
+ "host": "foo.com",
+ "plugins": {
+ "redirect": {
+ "http_to_https": true
+ }
+ }
+}'
+```
+
+2. Combine the advanced routing rule `vars` with the `redirect` plugin:
+
+```shell
+curl -i http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+ "uri": "/hello",
+ "host": "foo.com",
+ "vars": [
+ [
+ "scheme",
+ "==",
+ "http"
+ ]
+ ],
+ "plugins": {
+ "redirect": {
+ "uri": "https://$host$request_uri",
+ "ret_code": 301
+ }
+ }
+}'
+```
+
+3. Use the `serverless` plugin:
+
+```shell
+curl -i http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+ "uri": "/hello",
+ "plugins": {
+ "serverless-pre-function": {
+ "phase": "rewrite",
+ "functions": ["return function() if ngx.var.scheme == \"http\" and ngx.var.host == \"foo.com\" then ngx.header[\"Location\"] = \"https://foo.com\" .. ngx.var.request_uri; ngx.exit(ngx.HTTP_MOVED_PERMANENTLY); end; end"]
+ }
+ }
+}'
+```
+
+Then test whether it works:
+```shell
+curl -i -H 'Host: foo.com' http://127.0.0.1:9080/hello
+```
+
+The response should be:
+```
+HTTP/1.1 301 Moved Permanently
+Date: Mon, 18 May 2020 02:56:04 GMT
+Content-Type: text/html
+Content-Length: 166
+Connection: keep-alive
+Location: https://foo.com/hello
+Server: APISIX web server
+
+<html>
+<head><title>301 Moved Permanently</title></head>
+<body>
+<center><h1>301 Moved Permanently</h1></center>
+<hr><center>openresty</center>
+</body>
+</html>
+```
+
## 如何修改日志等级
默认的APISIX日志等级为`warn`,如果需要查看`core.log.info`的打印结果需要将日志等级调整为`info`。
@@ -123,3 +206,17 @@ https://github.com/iresty/lua-resty-radixtree#operator-list
2、重启APISIX
之后便可以在logs/error.log中查看到info的日志了。
+
+## 如何加载自己编写的插件
+
+Apache APISIX plugins support hot reloading. If the Admin API is enabled on your APISIX node, then for scenarios such as adding, deleting, or modifying plugins, you can hot reload them by calling an HTTP interface, without restarting the service.
+
+```shell
+curl http://127.0.0.1:9080/apisix/admin/plugins/reload -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT
+```
+
+If the Admin API is not enabled on your APISIX node, you can load the plugins by manually reloading APISIX.
+
+```shell
+apisix reload
+```
diff --git a/Makefile b/Makefile
index 1ecc3074276a8..5836e3690988a 100644
--- a/Makefile
+++ b/Makefile
@@ -79,9 +79,13 @@ init: default
### run: Start the apisix server
.PHONY: run
run: default
+ifeq ("$(wildcard logs/nginx.pid)", "")
mkdir -p logs
mkdir -p /tmp/apisix_cores/
$(OR_EXEC) -p $$PWD/ -c $$PWD/conf/nginx.conf
+else
+ @echo "APISIX is running..."
+endif
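The `ifeq ("$(wildcard logs/nginx.pid)", "")` guard added above starts the server only when no pid file exists. The same logic in plain shell (a sketch for illustration; the function name is made up) would be:

```shell
# Shell equivalent of the Makefile's pid-file guard.
run_if_not_running() {
    local pid_file="$1"
    if [ ! -e "$pid_file" ]; then
        echo "starting APISIX"   # mkdir/openresty invocation would go here
    else
        echo "APISIX is running..."
    fi
}
```

Note that `$(wildcard ...)` expands to the empty string when the file is absent, which is why the Makefile compares against `""`.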
### stop: Stop the apisix server
@@ -110,7 +114,7 @@ reload: default
### install: Install the apisix
.PHONY: install
-install:
+install: default
$(INSTALL) -d /usr/local/apisix/
$(INSTALL) -d /usr/local/apisix/logs/
$(INSTALL) -d /usr/local/apisix/conf/cert
@@ -124,6 +128,9 @@ install:
$(INSTALL) -d $(INST_LUADIR)/apisix/admin
$(INSTALL) apisix/admin/*.lua $(INST_LUADIR)/apisix/admin/
+ $(INSTALL) -d $(INST_LUADIR)/apisix/balancer
+ $(INSTALL) apisix/balancer/*.lua $(INST_LUADIR)/apisix/balancer/
+
$(INSTALL) -d $(INST_LUADIR)/apisix/core
$(INSTALL) apisix/core/*.lua $(INST_LUADIR)/apisix/core/
@@ -133,6 +140,9 @@ install:
$(INSTALL) -d $(INST_LUADIR)/apisix/http/router
$(INSTALL) apisix/http/router/*.lua $(INST_LUADIR)/apisix/http/router/
+ $(INSTALL) -d $(INST_LUADIR)/apisix/discovery
+ $(INSTALL) apisix/discovery/*.lua $(INST_LUADIR)/apisix/discovery/
+
$(INSTALL) -d $(INST_LUADIR)/apisix/plugins
$(INSTALL) apisix/plugins/*.lua $(INST_LUADIR)/apisix/plugins/
@@ -148,6 +158,9 @@ install:
$(INSTALL) -d $(INST_LUADIR)/apisix/plugins/zipkin
$(INSTALL) apisix/plugins/zipkin/*.lua $(INST_LUADIR)/apisix/plugins/zipkin/
+ $(INSTALL) -d $(INST_LUADIR)/apisix/plugins/skywalking
+ $(INSTALL) apisix/plugins/skywalking/*.lua $(INST_LUADIR)/apisix/plugins/skywalking/
+
$(INSTALL) -d $(INST_LUADIR)/apisix/stream/plugins
$(INSTALL) apisix/stream/plugins/*.lua $(INST_LUADIR)/apisix/stream/plugins/
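As an aside, the install target above repeats the same `$(INSTALL) -d` / `$(INSTALL) ... *.lua` pair for every subtree; the pattern could be collapsed into one loop (a sketch, `install_tree` is a made-up helper, not part of this patch):

```shell
# Sketch: copy each directory of Lua sources into the install prefix.
# install_tree SRC DST DIR... installs SRC/DIR/*.lua into DST/DIR.
install_tree() {
    local src="$1" dst="$2" dir
    shift 2
    for dir in "$@"; do
        install -d "$dst/$dir"
        install "$src/$dir"/*.lua "$dst/$dir/"
    done
}

# e.g.: install_tree apisix "$INST_LUADIR/apisix" admin balancer core discovery
```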
diff --git a/README.md b/README.md
index a0102b429f025..3572484d373be 100644
--- a/README.md
+++ b/README.md
@@ -39,8 +39,6 @@ APISIX is a cloud-based microservices API gateway that handles traditional north
APISIX provides dynamic load balancing, authentication, rate limiting, other plugins through plugin mechanisms, and supports plugins you develop yourself.
-For more detailed information, see the [White Paper](https://www.iresty.com/download/Choosing%20the%20Right%20Microservice%20API%20Gateway%20for%20the%20Enterprise%20User.pdf).
-
![](doc/images/apisix.png)
## Features
@@ -50,7 +48,7 @@ A/B testing, canary release, blue-green deployment, limit rate, defense against
- **All platforms**
- Cloud-Native: Platform agnostic, No vendor lock-in, APISIX can run from bare-metal to Kubernetes.
- Run Environment: Both OpenResty and Tengine are supported.
- - Supports [ARM64](https://zhuanlan.zhihu.com/p/84467919): Don't worry about the lock-in of the infra technology.
+ - Supports ARM64: no need to worry about infrastructure lock-in.
- **Multi protocols**
- [TCP/UDP Proxy](doc/stream-proxy.md): Dynamic TCP/UDP proxy.
@@ -72,6 +70,7 @@ A/B testing, canary release, blue-green deployment, limit rate, defense against
- Hash-based Load Balancing: Load balance with consistent hashing sessions.
- [Health Checks](doc/health-check.md): Enable health check on the upstream node, and will automatically filter unhealthy nodes during load balancing to ensure system stability.
- Circuit-Breaker: Intelligent tracking of unhealthy upstream services.
+ - [Dynamic service discovery](doc/discovery.md): Supports registry-based service discovery, reducing reverse proxy maintenance costs.
- **Fine-grained routing**
- [Supports full path matching and prefix matching](doc/router-radixtree.md#how-to-use-libradixtree-in-apisix)
@@ -79,7 +78,7 @@ A/B testing, canary release, blue-green deployment, limit rate, defense against
- Support [various operators as judgment conditions for routing](https://github.com/iresty/lua-resty-radixtree#operator-list), for example `{"arg_age", ">", 24}`
- Support [custom route matching function](https://github.com/iresty/lua-resty-radixtree/blob/master/t/filter-fun.t#L10)
- IPv6: Use IPv6 to match route.
- - Support [TTL](doc/admin-api-cn.md#route)
+ - Support [TTL](doc/zh-cn/admin-api.md#route)
- [Support priority](doc/router-radixtree.md#3-match-priority)
- [Support Batch Http Requests](doc/plugins/batch-requests.md)
@@ -91,10 +90,11 @@ A/B testing, canary release, blue-green deployment, limit rate, defense against
- [Limit-count](doc/plugins/limit-count.md)
- [Limit-concurrency](doc/plugins/limit-conn.md)
- Anti-ReDoS(Regular expression Denial of Service): Built-in policies to Anti ReDoS without configuration.
- - [CORS](doc/plugins/cors.md)
+ - [CORS](doc/plugins/cors.md): Enable CORS (Cross-Origin Resource Sharing) for your API.
+ - [uri-blocker](doc/plugins/uri-blocker.md): Block client requests by URI.
- **OPS friendly**
- - OpenTracing: [support Apache Skywalking and Zipkin](doc/plugins/zipkin.md)
+ - OpenTracing: supports [Apache Skywalking](doc/plugins/skywalking.md) and [Zipkin](doc/plugins/zipkin.md)
- Monitoring And Metrics: [Prometheus](doc/plugins/prometheus.md)
- Clustering: APISIX nodes are stateless, creates clustering of the configuration center, please refer to [etcd Clustering Guide](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/clustering.md).
- High availability: support to configure multiple etcd addresses in the same cluster.
@@ -106,7 +106,6 @@ A/B testing, canary release, blue-green deployment, limit rate, defense against
- High performance: The single-core QPS reaches 18k with an average delay of less than 0.2 milliseconds.
- [Fault Injection](doc/plugins/fault-injection.md)
- [REST Admin API](doc/admin-api.md): Using the REST Admin API to control Apache APISIX, which only allows 127.0.0.1 access by default, you can modify the `allow_admin` field in `conf/config.yaml` to specify a list of IPs that are allowed to call the Admin API. Also note that the Admin API uses key auth to verify the identity of the caller. **The `admin_key` field in `conf/config.yaml` needs to be modified before deployment to ensure security**.
- - [Python SDK](https://github.com/api7/apache-apisix-python-sdk)
- External Loggers: Export access logs to external log management tools. ([HTTP Logger](doc/plugins/http-logger.md), [TCP Logger](doc/plugins/tcp-logger.md), [Kafka Logger](doc/plugins/kafka-logger.md), [UDP Logger](doc/plugins/udp-logger.md))
- **Highly scalable**
@@ -121,7 +120,7 @@ APISIX Installed and tested in the following systems(OpenResty MUST >= 1.15.8.1,
CentOS 7, Ubuntu 16.04, Ubuntu 18.04, Debian 9, Debian 10, macOS, **ARM64** Ubuntu 18.04
Steps to install APISIX:
-1. Installation runtime dependencies: OpenResty and etcd, refer to [documentation](doc/install-dependencies.md)
+1. Install runtime dependencies: Nginx and etcd, refer to the [documentation](doc/install-dependencies.md)
2. There are several ways to install Apache APISIX:
- [Source Release](doc/how-to-build.md#installation-via-source-release)
- [RPM package](doc/how-to-build.md#installation-via-rpm-package-centos-7) for CentOS 7
@@ -171,8 +170,6 @@ Do not need to fill the user name and password, log in directly.
The dashboard only allows 127.0.0.1 by default, and you can modify `allow_admin` in `conf/config.yaml` by yourself, to list the list of IPs allowed to access.
-We provide an online dashboard [demo version](http://apisix.iresty.com), make it easier for you to understand APISIX.
-
## Benchmark
Using AWS's 8 core server, APISIX's QPS reach to 140,000 with a latency of only 0.2 ms.
diff --git a/README_CN.md b/README_CN.md
index d3e79098cbceb..408fc2227a1b0 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -29,7 +29,7 @@
APISIX 是一个云原生、高性能、可扩展的微服务 API 网关。
-它是基于 OpenResty 和 etcd 来实现,和传统 API 网关相比,APISIX 具备动态路由和插件热加载,特别适合微服务体系下的 API 管理。
+它是基于 Nginx 和 etcd 来实现,和传统 API 网关相比,APISIX 具备动态路由和插件热加载,特别适合微服务体系下的 API 管理。
## 为什么选择 APISIX?
@@ -39,8 +39,6 @@ APISIX 是基于云原生的微服务 API 网关,它是所有业务流量的
APISIX 通过插件机制,提供动态负载平衡、身份验证、限流限速等功能,并且支持你自己开发的插件。
-更多详细的信息,可以查阅[ APISIX 的白皮书](https://www.iresty.com/download/%E4%BC%81%E4%B8%9A%E7%94%A8%E6%88%B7%E5%A6%82%E4%BD%95%E9%80%89%E6%8B%A9%E5%BE%AE%E6%9C%8D%E5%8A%A1%20API%20%E7%BD%91%E5%85%B3.pdf)
-
![](doc/images/apisix.png)
## 功能
@@ -50,28 +48,29 @@ A/B 测试、金丝雀发布(灰度发布)、蓝绿部署、限流限速、抵
- **全平台**
- 云原生: 平台无关,没有供应商锁定,无论裸机还是 Kubernetes,APISIX 都可以运行。
- 运行环境: OpenResty 和 Tengine 都支持。
- - 支持 [ARM64](https://zhuanlan.zhihu.com/p/84467919): 不用担心底层技术的锁定。
+ - 支持 ARM64: 不用担心底层技术的锁定。
- **多协议**
- - [TCP/UDP 代理](doc/stream-proxy-cn.md): 动态 TCP/UDP 代理。
- - [动态 MQTT 代理](doc/plugins/mqtt-proxy-cn.md): 支持用 `client_id` 对 MQTT 进行负载均衡,同时支持 MQTT [3.1.*](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) 和 [5.0](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html) 两个协议标准。
- - [gRPC 代理](doc/grpc-proxy-cn.md):通过 APISIX 代理 gRPC 连接,并使用 APISIX 的大部分特性管理你的 gRPC 服务。
+ - [TCP/UDP 代理](doc/zh-cn/stream-proxy.md): 动态 TCP/UDP 代理。
+ - [动态 MQTT 代理](doc/zh-cn/plugins/mqtt-proxy.md): 支持用 `client_id` 对 MQTT 进行负载均衡,同时支持 MQTT [3.1.*](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) 和 [5.0](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html) 两个协议标准。
+ - [gRPC 代理](doc/zh-cn/grpc-proxy.md):通过 APISIX 代理 gRPC 连接,并使用 APISIX 的大部分特性管理你的 gRPC 服务。
- [gRPC 协议转换](doc/plugins/grpc-transcoding-cn.md):支持协议的转换,这样客户端可以通过 HTTP/JSON 来访问你的 gRPC API。
- Websocket 代理
- Proxy Protocol
- Dubbo 代理:基于 Tengine,可以实现 Dubbo 请求的代理。
- HTTP(S) 反向代理
- - [SSL](doc/https-cn.md):动态加载 SSL 证书。
+ - [SSL](doc/zh-cn/https.md):动态加载 SSL 证书。
- **全动态能力**
- - [热更新和热插件](doc/plugins-cn.md): 无需重启服务,就可以持续更新配置和插件。
- - [代理请求重写](doc/plugins/proxy-rewrite-cn.md): 支持重写请求上游的`host`、`uri`、`schema`、`enable_websocket`、`headers`信息。
- - [输出内容重写](doc/plugins/response-rewrite-cn.md): 支持自定义修改返回内容的 `status code`、`body`、`headers`。
- - [Serverless](doc/plugins/serverless-cn.md): 在 APISIX 的每一个阶段,你都可以添加并调用自己编写的函数。
+ - [热更新和热插件](doc/zh-cn/plugins.md): 无需重启服务,就可以持续更新配置和插件。
+ - [代理请求重写](doc/zh-cn/plugins/proxy-rewrite.md): 支持重写请求上游的`host`、`uri`、`schema`、`enable_websocket`、`headers`信息。
+ - [输出内容重写](doc/zh-cn/plugins/response-rewrite.md): 支持自定义修改返回内容的 `status code`、`body`、`headers`。
+ - [Serverless](doc/zh-cn/plugins/serverless.md): 在 APISIX 的每一个阶段,你都可以添加并调用自己编写的函数。
- 动态负载均衡:动态支持有权重的 round-robin 负载平衡。
- 支持一致性 hash 的负载均衡:动态支持一致性 hash 的负载均衡。
- [健康检查](doc/health-check.md):启用上游节点的健康检查,将在负载均衡期间自动过滤不健康的节点,以确保系统稳定性。
- 熔断器: 智能跟踪不健康上游服务。
+ - [动态服务发现](doc/zh-cn/discovery.md):支持基于注册中心的服务发现功能,降低反向代理维护成本。
- **精细化路由**
- [支持全路径匹配和前缀匹配](doc/router-radixtree.md#how-to-use-libradixtree-in-apisix)
@@ -79,38 +78,38 @@ A/B 测试、金丝雀发布(灰度发布)、蓝绿部署、限流限速、抵
- 支持[各类操作符做为路由的判断条件](https://github.com/iresty/lua-resty-radixtree#operator-list),比如 `{"arg_age", ">", 24}`
- 支持[自定义路由匹配函数](https://github.com/iresty/lua-resty-radixtree/blob/master/t/filter-fun.t#L10)
- IPv6:支持使用 IPv6 格式匹配路由
- - 支持路由的[自动过期(TTL)](doc/admin-api-cn.md#route)
+ - 支持路由的[自动过期(TTL)](doc/zh-cn/admin-api.md#route)
- [支持路由的优先级](doc/router-radixtree.md#3-match-priority)
- - [支持批量 Http 请求](doc/plugins/batch-requests-cn.md)
+ - [支持批量 Http 请求](doc/zh-cn/plugins/batch-requests.md)
- **安全防护**
- - 多种身份认证方式: [key-auth](doc/plugins/key-auth-cn.md), [JWT](doc/plugins/jwt-auth-cn.md), [basic-auth](doc/plugins/basic-auth-cn.md), [wolf-rbac](doc/plugins/wolf-rbac-cn.md)。
- - [IP 黑白名单](doc/plugins/ip-restriction-cn.md)
+ - 多种身份认证方式: [key-auth](doc/zh-cn/plugins/key-auth.md), [JWT](doc/zh-cn/plugins/jwt-auth.md), [basic-auth](doc/zh-cn/plugins/basic-auth.md), [wolf-rbac](doc/zh-cn/plugins/wolf-rbac.md)。
+ - [IP 黑白名单](doc/zh-cn/plugins/ip-restriction.md)
- [IdP 支持](doc/plugins/oauth.md): 支持外部的身份认证服务,比如 Auth0,Okta,Authing 等,用户可以借此来对接 Oauth2.0 等认证方式。
- - [限制速率](doc/plugins/limit-req-cn.md)
- - [限制请求数](doc/plugins/limit-count-cn.md)
- - [限制并发](doc/plugins/limit-conn-cn.md)
+ - [限制速率](doc/zh-cn/plugins/limit-req.md)
+ - [限制请求数](doc/zh-cn/plugins/limit-count.md)
+ - [限制并发](doc/zh-cn/plugins/limit-conn.md)
- 防御 ReDoS(正则表达式拒绝服务):内置策略,无需配置即可抵御 ReDoS。
- - [CORS](doc/plugins/cors-cn.md)
+ - [CORS](doc/zh-cn/plugins/cors.md):为你的API启用 CORS。
+ - [uri-blocker](doc/zh-cn/plugins/uri-blocker.md):根据 URI 拦截用户请求。
- **运维友好**
- - OpenTracing 可观测性: [支持 Apache Skywalking 和 Zipkin](doc/plugins/zipkin-cn.md)。
- - 监控和指标: [Prometheus](doc/plugins/prometheus-cn.md)
+ - OpenTracing 可观测性: 支持 [Apache Skywalking](doc/zh-cn/plugins/skywalking.md) 和 [Zipkin](doc/zh-cn/plugins/zipkin.md)。
+ - 监控和指标: [Prometheus](doc/zh-cn/plugins/prometheus.md)
- 集群:APISIX 节点是无状态的,创建配置中心集群请参考 [etcd Clustering Guide](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/clustering.md)。
- 高可用:支持配置同一个集群内的多个 etcd 地址。
- 控制台: 内置控制台来操作 APISIX 集群。
- 版本控制:支持操作的多次回滚。
- CLI: 使用命令行来启动、关闭和重启 APISIX。
- - [单机模式](doc/stand-alone-cn.md): 支持从本地配置文件中加载路由规则,在 kubernetes(k8s) 等环境下更友好。
- - [全局规则](doc/architecture-design-cn.md#Global-Rule):允许对所有请求执行插件,比如黑白名单、限流限速等。
+ - [单机模式](doc/zh-cn/stand-alone.md): 支持从本地配置文件中加载路由规则,在 kubernetes(k8s) 等环境下更友好。
+ - [全局规则](doc/zh-cn/architecture-design.md#Global-Rule):允许对所有请求执行插件,比如黑白名单、限流限速等。
- 高性能:在单核上 QPS 可以达到 18k,同时延迟只有 0.2 毫秒。
- - [故障注入](doc/plugins/fault-injection-cn.md)
- - [REST Admin API](doc/admin-api-cn.md): 使用 REST Admin API 来控制 Apache APISIX,默认只允许 127.0.0.1 访问,你可以修改 `conf/config.yaml` 中的 `allow_admin` 字段,指定允许调用 Admin API 的 IP 列表。同时需要注意的是,Admin API 使用 key auth 来校验调用者身份,**在部署前需要修改 `conf/config.yaml` 中的 `admin_key` 字段,来保证安全。**
- - [Python SDK](https://github.com/api7/apache-apisix-python-sdk)
+ - [故障注入](doc/zh-cn/plugins/fault-injection.md)
+ - [REST Admin API](doc/zh-cn/admin-api.md): 使用 REST Admin API 来控制 Apache APISIX,默认只允许 127.0.0.1 访问,你可以修改 `conf/config.yaml` 中的 `allow_admin` 字段,指定允许调用 Admin API 的 IP 列表。同时需要注意的是,Admin API 使用 key auth 来校验调用者身份,**在部署前需要修改 `conf/config.yaml` 中的 `admin_key` 字段,来保证安全。**
- 外部日志记录器:将访问日志导出到外部日志管理工具。([HTTP Logger](doc/plugins/http-logger.md), [TCP Logger](doc/plugins/tcp-logger.md), [Kafka Logger](doc/plugins/kafka-logger.md), [UDP Logger](doc/plugins/udp-logger.md))
- **高度可扩展**
- - [自定义插件](doc/plugin-develop-cn.md): 允许挂载常见阶段,例如`init`, `rewrite`,`access`,`balancer`,`header filer`,`body filter` 和 `log` 阶段。
+ - [自定义插件](doc/zh-cn/plugin-develop.md): 允许挂载常见阶段,例如`init`, `rewrite`,`access`,`balancer`,`header filter`,`body filter` 和 `log` 阶段。
- 自定义负载均衡算法:可以在 `balancer` 阶段使用自定义负载均衡算法。
- 自定义路由: 支持用户自己实现路由算法。
@@ -118,14 +117,14 @@ A/B 测试、金丝雀发布(灰度发布)、蓝绿部署、限流限速、抵
APISIX 在以下操作系统中可顺利安装并做过运行测试,需要注意的是:OpenResty 的版本必须 >= 1.15.8.1:
-CentOS 7, Ubuntu 16.04, Ubuntu 18.04, Debian 9, Debian 10, macOS, **[ARM64](https://zhuanlan.zhihu.com/p/84467919)** Ubuntu 18.04
+CentOS 7, Ubuntu 16.04, Ubuntu 18.04, Debian 9, Debian 10, macOS, **ARM64** Ubuntu 18.04
安装 APISIX 的步骤:
-1. 安装运行时依赖:OpenResty 和 etcd,参考[依赖安装文档](doc/install-dependencies.md)
+1. 安装运行时依赖:OpenResty 和 etcd,参考[依赖安装文档](doc/zh-cn/install-dependencies.md)
2. 有以下几种方式来安装 Apache APISIX:
- - 通过[源码包安装](doc/how-to-build-cn.md#通过源码包安装);
- - 如果你在使用 CentOS 7,可以使用 [RPM 包安装](doc/how-to-build-cn.md#通过-rpm-包安装centos-7);
- - 其它 Linux 操作系统,可以使用 [Luarocks 安装方式](doc/how-to-build-cn.md#通过-luarocks-安装-不支持-macos);
+ - 通过[源码包安装](doc/zh-cn/how-to-build.md#通过源码包安装);
+ - 如果你在使用 CentOS 7,可以使用 [RPM 包安装](doc/zh-cn/how-to-build.md#通过-rpm-包安装centos-7);
+ - 其它 Linux 操作系统,可以使用 [Luarocks 安装方式](doc/zh-cn/how-to-build.md#通过-luarocks-安装-不支持-macos);
- 你也可以使用 [Docker 镜像](https://github.com/apache/incubator-apisix-docker) 来安装。
## 快速上手
@@ -138,9 +137,9 @@ sudo apisix start
2. 入门指南
-入门指南是学习 APISIX 基础知识的好方法。按照 [入门指南](doc/getting-started-cn.md)的步骤即可。
+入门指南是学习 APISIX 基础知识的好方法。按照 [入门指南](doc/zh-cn/getting-started.md)的步骤即可。
-更进一步,你可以跟着文档来尝试更多的[插件](doc/README_CN.md#插件)。
+更进一步,你可以跟着文档来尝试更多的[插件](doc/zh-cn/README.md#插件)。
## 控制台
@@ -172,15 +171,13 @@ cp -r dist/* .
Dashboard 默认只允许 127.0.0.1 访问。你可以自行修改 `conf/config.yaml` 中的 `allow_admin` 字段,指定允许访问 dashboard 的 IP 列表。
-我们部署了一个在线的 [Dashboard](http://apisix.iresty.com) ,方便你了解 APISIX。
-
## 性能测试
使用 AWS 的 8 核心服务器来压测 APISIX,QPS 可以达到 140000,同时延时只有 0.2 毫秒。
## 文档
-[Apache APISIX 文档索引](doc/README_CN.md)
+[Apache APISIX 文档索引](doc/zh-cn/README.md)
## Apache APISIX 和 Kong 的比较
@@ -241,7 +238,7 @@ Dashboard 默认只允许 127.0.0.1 访问。你可以自行修改 `conf/config.
- [思必驰:为什么我们重新写了一个 k8s ingress controller?](https://mp.weixin.qq.com/s/bmm2ibk2V7-XYneLo9XAPQ)
## APISIX 的用户有哪些?
-有很多公司和组织把 APISIX 用户学习、研究、生产环境和商业产品中,包括:
+有很多公司和组织把 APISIX 用于学习、研究、生产环境和商业产品中,包括:
diff --git a/apisix/admin/global_rules.lua b/apisix/admin/global_rules.lua
index c74d7739d2cf6..a768012f99604 100644
--- a/apisix/admin/global_rules.lua
+++ b/apisix/admin/global_rules.lua
@@ -43,6 +43,8 @@ local function check_conf(id, conf, need_id)
return nil, {error_msg = "wrong route id"}
end
+ conf.id = id
+
core.log.info("schema: ", core.json.delay_encode(core.schema.global_rule))
core.log.info("conf : ", core.json.delay_encode(conf))
local ok, err = core.schema.check(core.schema.global_rule, conf)
@@ -104,19 +106,19 @@ function _M.delete(id)
end
-function _M.patch(id, conf, sub_path)
+function _M.patch(id, conf)
if not id then
return 400, {error_msg = "missing global rule id"}
end
- if not sub_path then
- return 400, {error_msg = "missing sub-path"}
- end
-
if not conf then
return 400, {error_msg = "missing new configuration"}
end
+ if type(conf) ~= "table" then
+ return 400, {error_msg = "invalid configuration"}
+ end
+
local key = "/global_rules/" .. id
local res_old, err = core.etcd.get(key)
if not res_old then
@@ -131,32 +133,9 @@ function _M.patch(id, conf, sub_path)
core.json.delay_encode(res_old, true))
local node_value = res_old.body.node.value
- local sub_value = node_value
- local sub_paths = core.utils.split_uri(sub_path)
- for i = 1, #sub_paths - 1 do
- local sub_name = sub_paths[i]
- if sub_value[sub_name] == nil then
- sub_value[sub_name] = {}
- end
- sub_value = sub_value[sub_name]
+ node_value = core.table.merge(node_value, conf)
- if type(sub_value) ~= "table" then
- return 400, "invalid sub-path: /"
- .. core.table.concat(sub_paths, 1, i)
- end
- end
-
- if type(sub_value) ~= "table" then
- return 400, "invalid sub-path: /" .. sub_path
- end
-
- local sub_name = sub_paths[#sub_paths]
- if sub_name and sub_name ~= "" then
- sub_value[sub_name] = conf
- else
- node_value = conf
- end
core.log.info("new conf: ", core.json.delay_encode(node_value, true))
local ok, err = check_conf(id, node_value, true)
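The rewritten `patch` handler drops the old sub-path editing and merges the whole request body into the stored object before re-validating it. A minimal Python sketch of the semantics assumed here for `core.table.merge` (recursive, right-hand values win) is:

```python
def merge(old: dict, new: dict) -> dict:
    """Recursively merge `new` into `old`; scalar values from `new` win."""
    for key, value in new.items():
        if isinstance(value, dict) and isinstance(old.get(key), dict):
            merge(old[key], value)
        else:
            old[key] = value
    return old

# Stored object and PATCH body (illustrative values only).
node_value = {"uri": "/index",
              "plugins": {"limit-count": {"count": 2, "time_window": 60}}}
patch_body = {"plugins": {"limit-count": {"count": 5}}}
merged = merge(node_value, patch_body)   # untouched keys survive the patch
```

Whether `core.table.merge` recurses into nested tables or overwrites them wholesale is an assumption in this sketch; `apisix/core/table.lua` is authoritative.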
diff --git a/apisix/admin/plugins.lua b/apisix/admin/plugins.lua
index 7d6262c59c103..7b835e10ca436 100644
--- a/apisix/admin/plugins.lua
+++ b/apisix/admin/plugins.lua
@@ -18,9 +18,13 @@ local core = require("apisix.core")
local local_plugins = require("apisix.plugin").plugins_hash
local stream_local_plugins = require("apisix.plugin").stream_plugins_hash
local pairs = pairs
+local ipairs = ipairs
local pcall = pcall
local require = require
local table_remove = table.remove
+local table_sort = table.sort
+local table_insert = table.insert
+
local _M = {
version = 0.1,
@@ -114,7 +118,23 @@ function _M.get_plugins_list()
table_remove(plugins, 1)
end
- return plugins
+ local priorities = {}
+ local success = {}
+ for i, name in ipairs(plugins) do
+ local plugin_name = "apisix.plugins." .. name
+ local ok, plugin = pcall(require, plugin_name)
+ if ok and plugin.priority then
+ priorities[name] = plugin.priority
+ table_insert(success, name)
+ end
+ end
+
+ local function cmp(x, y)
+ return priorities[x] > priorities[y]
+ end
+
+ table_sort(success, cmp)
+ return success
end
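`get_plugins_list` now keeps only the plugins that load successfully and expose a `priority`, then returns them sorted by priority, highest first. A Python sketch of that filter-and-sort step (the `loaded` table is hypothetical and stands in for `pcall(require, "apisix.plugins." .. name)`):

```python
# Hypothetical load results; None marks a plugin whose require() failed.
loaded = {
    "limit-count": {"priority": 1002},
    "prometheus": {"priority": 500},
    "broken-plugin": None,            # failed to load: silently skipped
    "zipkin": {"priority": 11011},
}

def plugins_by_priority(plugins: dict) -> list:
    """Keep loadable plugins that declare a priority, sorted descending."""
    ok = [name for name, p in plugins.items() if p and "priority" in p]
    return sorted(ok, key=lambda name: plugins[name]["priority"], reverse=True)

order = plugins_by_priority(loaded)
```

The priority values above are illustrative; sorting descending means the Admin API lists plugins in the same high-to-low order APISIX runs them in.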
diff --git a/apisix/admin/routes.lua b/apisix/admin/routes.lua
index 3303e8dc0d0cf..2ce284be71c93 100644
--- a/apisix/admin/routes.lua
+++ b/apisix/admin/routes.lua
@@ -45,6 +45,8 @@ local function check_conf(id, conf, need_id)
return nil, {error_msg = "wrong route id"}
end
+ conf.id = id
+
core.log.info("schema: ", core.json.delay_encode(core.schema.route))
core.log.info("conf : ", core.json.delay_encode(conf))
local ok, err = core.schema.check(core.schema.route, conf)
@@ -135,7 +137,7 @@ function _M.put(id, conf, sub_path, args)
local key = "/routes/" .. id
local res, err = core.etcd.set(key, conf, args.ttl)
if not res then
- core.log.error("failed to put route[", key, "]: ", err)
+ core.log.error("failed to put route[", key, "] to etcd: ", err)
return 500, {error_msg = err}
end
@@ -151,7 +153,7 @@ function _M.get(id)
local res, err = core.etcd.get(key)
if not res then
- core.log.error("failed to get route[", key, "]: ", err)
+ core.log.error("failed to get route[", key, "] from etcd: ", err)
return 500, {error_msg = err}
end
@@ -169,7 +171,7 @@ function _M.post(id, conf, sub_path, args)
-- core.log.info("key: ", key)
local res, err = core.etcd.push("/routes", conf, args.ttl)
if not res then
- core.log.error("failed to post route[", key, "]: ", err)
+ core.log.error("failed to post route[", key, "] to etcd: ", err)
return 500, {error_msg = err}
end
@@ -186,7 +188,7 @@ function _M.delete(id)
-- core.log.info("key: ", key)
local res, err = core.etcd.delete(key)
if not res then
- core.log.error("failed to delete route[", key, "]: ", err)
+ core.log.error("failed to delete route[", key, "] in etcd: ", err)
return 500, {error_msg = err}
end
@@ -194,19 +196,19 @@ function _M.delete(id)
end
-function _M.patch(id, conf, sub_path, args)
+function _M.patch(id, conf, args)
if not id then
return 400, {error_msg = "missing route id"}
end
- if not sub_path then
- return 400, {error_msg = "missing sub-path"}
- end
-
if not conf then
return 400, {error_msg = "missing new configuration"}
end
+ if type(conf) ~= "table" then
+ return 400, {error_msg = "invalid configuration"}
+ end
+
local key = "/routes"
if id then
key = key .. "/" .. id
@@ -214,7 +216,7 @@ function _M.patch(id, conf, sub_path, args)
local res_old, err = core.etcd.get(key)
if not res_old then
- core.log.error("failed to get route [", key, "]: ", err)
+ core.log.error("failed to get route [", key, "] in etcd: ", err)
return 500, {error_msg = err}
end
@@ -224,33 +226,11 @@ function _M.patch(id, conf, sub_path, args)
core.log.info("key: ", key, " old value: ",
core.json.delay_encode(res_old, true))
- local node_value = res_old.body.node.value
- local sub_value = node_value
- local sub_paths = core.utils.split_uri(sub_path)
- for i = 1, #sub_paths - 1 do
- local sub_name = sub_paths[i]
- if sub_value[sub_name] == nil then
- sub_value[sub_name] = {}
- end
- sub_value = sub_value[sub_name]
-
- if type(sub_value) ~= "table" then
- return 400, "invalid sub-path: /"
- .. core.table.concat(sub_paths, 1, i)
- end
- end
+ local node_value = res_old.body.node.value
- if type(sub_value) ~= "table" then
- return 400, "invalid sub-path: /" .. sub_path
- end
+ node_value = core.table.merge(node_value, conf)
- local sub_name = sub_paths[#sub_paths]
- if sub_name and sub_name ~= "" then
- sub_value[sub_name] = conf
- else
- node_value = conf
- end
core.log.info("new conf: ", core.json.delay_encode(node_value, true))
local id, err = check_conf(id, node_value, true)
@@ -261,7 +241,7 @@ function _M.patch(id, conf, sub_path, args)
-- TODO: this is not safe, we need to use compare-set
local res, err = core.etcd.set(key, node_value, args.ttl)
if not res then
- core.log.error("failed to set new route[", key, "]: ", err)
+ core.log.error("failed to set new route[", key, "] to etcd: ", err)
return 500, {error_msg = err}
end
diff --git a/apisix/admin/services.lua b/apisix/admin/services.lua
index e26ea41e63362..c10a215fd6102 100644
--- a/apisix/admin/services.lua
+++ b/apisix/admin/services.lua
@@ -20,7 +20,6 @@ local schema_plugin = require("apisix.admin.plugins").check_schema
local upstreams = require("apisix.admin.upstreams")
local tostring = tostring
local ipairs = ipairs
-local tonumber = tonumber
local type = type
@@ -47,6 +46,7 @@ local function check_conf(id, conf, need_id)
return nil, {error_msg = "wrong service id"}
end
+ conf.id = id
core.log.info("schema: ", core.json.delay_encode(core.schema.service))
core.log.info("conf : ", core.json.delay_encode(conf))
@@ -55,7 +55,7 @@ local function check_conf(id, conf, need_id)
return nil, {error_msg = "invalid configuration: " .. err}
end
- if need_id and not tonumber(id) then
+ if need_id and not id then
return nil, {error_msg = "wrong type of service id"}
end
@@ -177,19 +177,19 @@ function _M.delete(id)
end
-function _M.patch(id, conf, sub_path)
+function _M.patch(id, conf)
if not id then
return 400, {error_msg = "missing service id"}
end
- if not sub_path then
- return 400, {error_msg = "missing sub-path"}
- end
-
if not conf then
return 400, {error_msg = "missing new configuration"}
end
+ if type(conf) ~= "table" then
+ return 400, {error_msg = "invalid configuration"}
+ end
+
local key = "/services" .. "/" .. id
local res_old, err = core.etcd.get(key)
if not res_old then
@@ -204,32 +204,9 @@ function _M.patch(id, conf, sub_path)
core.json.delay_encode(res_old, true))
local new_value = res_old.body.node.value
- local sub_value = new_value
- local sub_paths = core.utils.split_uri(sub_path)
- for i = 1, #sub_paths - 1 do
- local sub_name = sub_paths[i]
- if sub_value[sub_name] == nil then
- sub_value[sub_name] = {}
- end
- sub_value = sub_value[sub_name]
+ new_value = core.table.merge(new_value, conf)
- if type(sub_value) ~= "table" then
- return 400, "invalid sub-path: /"
- .. core.table.concat(sub_paths, 1, i)
- end
- end
-
- if type(sub_value) ~= "table" then
- return 400, "invalid sub-path: /" .. sub_path
- end
-
- local sub_name = sub_paths[#sub_paths]
- if sub_name and sub_name ~= "" then
- sub_value[sub_name] = conf
- else
- new_value = conf
- end
core.log.info("new value ", core.json.delay_encode(new_value, true))
local id, err = check_conf(id, new_value, true)
diff --git a/apisix/admin/ssl.lua b/apisix/admin/ssl.lua
index 898d9c1a988f3..6d9307d95d1da 100644
--- a/apisix/admin/ssl.lua
+++ b/apisix/admin/ssl.lua
@@ -14,10 +14,13 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
-local core = require("apisix.core")
-local schema_plugin = require("apisix.admin.plugins").check_schema
-local tostring = tostring
-
+local core = require("apisix.core")
+local tostring = tostring
+local aes = require "resty.aes"
+local ngx_encode_base64 = ngx.encode_base64
+local str_find = string.find
+local type = type
+local assert = assert
local _M = {
version = 0.1,
@@ -42,6 +45,8 @@ local function check_conf(id, conf, need_id)
return nil, {error_msg = "wrong ssl id"}
end
+ conf.id = id
+
core.log.info("schema: ", core.json.delay_encode(core.schema.ssl))
core.log.info("conf : ", core.json.delay_encode(conf))
local ok, err = core.schema.check(core.schema.ssl, conf)
@@ -49,48 +54,31 @@ local function check_conf(id, conf, need_id)
return nil, {error_msg = "invalid configuration: " .. err}
end
- local upstream_id = conf.upstream_id
- if upstream_id then
- local key = "/upstreams/" .. upstream_id
- local res, err = core.etcd.get(key)
- if not res then
- return nil, {error_msg = "failed to fetch upstream info by "
- .. "upstream id [" .. upstream_id .. "]: "
- .. err}
- end
-
- if res.status ~= 200 then
- return nil, {error_msg = "failed to fetch upstream info by "
- .. "upstream id [" .. upstream_id .. "], "
- .. "response code: " .. res.status}
- end
- end
+ return need_id and id or true
+end
- local service_id = conf.service_id
- if service_id then
- local key = "/services/" .. service_id
- local res, err = core.etcd.get(key)
- if not res then
- return nil, {error_msg = "failed to fetch service info by "
- .. "service id [" .. service_id .. "]: "
- .. err}
- end
- if res.status ~= 200 then
- return nil, {error_msg = "failed to fetch service info by "
- .. "service id [" .. service_id .. "], "
- .. "response code: " .. res.status}
- end
+local function aes_encrypt(origin)
+ local local_conf = core.config.local_conf()
+ local iv
+ if local_conf and local_conf.apisix
+ and local_conf.apisix.ssl.key_encrypt_salt then
+ iv = local_conf.apisix.ssl.key_encrypt_salt
end
+ local aes_128_cbc_with_iv = (type(iv) == "string" and #iv == 16) and
+ assert(aes:new(iv, nil, aes.cipher(128, "cbc"), {iv = iv})) or nil
- if conf.plugins then
- local ok, err = schema_plugin(conf.plugins)
- if not ok then
- return nil, {error_msg = err}
+ if aes_128_cbc_with_iv ~= nil and str_find(origin, "---") then
+ local encrypted = aes_128_cbc_with_iv:encrypt(origin)
+ if encrypted == nil then
+ core.log.error("failed to encrypt key[", origin, "] ")
+ return origin
end
+
+ return ngx_encode_base64(encrypted)
end
- return need_id and id or true
+ return origin
end
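`aes_encrypt` only transforms the value when `key_encrypt_salt` is a 16-character string and the input still looks like a PEM key (contains `---`); anything else passes through unchanged, so an already-encrypted base64 value is never double-encrypted on a later PUT. A Python sketch of just that guard logic (the XOR step is a clearly labeled stand-in for AES-128-CBC, which would need a third-party library):

```python
import base64

def maybe_encrypt(origin: str, salt) -> str:
    """Encrypt only with a valid 16-char salt and a PEM-looking plaintext key."""
    if not (isinstance(salt, str) and len(salt) == 16):
        return origin                  # no usable salt configured
    if "---" not in origin:
        return origin                  # already base64 ciphertext: leave as-is
    # Stand-in for AES-128-CBC with iv=salt; NOT real encryption.
    xored = bytes(b ^ salt.encode()[i % 16] for i, b in enumerate(origin.encode()))
    return base64.b64encode(xored).decode()

pem = "-----BEGIN RSA PRIVATE KEY-----..."
out = maybe_encrypt(pem, "edd1c9f0985e76a2")   # illustrative 16-char salt
```

Because base64 output never contains `---`, feeding the result back through the function is a no-op, which is what makes repeated writes of the same SSL object safe.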
@@ -100,6 +88,9 @@ function _M.put(id, conf)
return 400, err
end
+ -- encrypt private key
+ conf.key = aes_encrypt(conf.key)
+
local key = "/ssl/" .. id
local res, err = core.etcd.set(key, conf)
if not res then
@@ -138,6 +129,9 @@ function _M.post(id, conf)
return 400, err
end
+ -- encrypt private key
+ conf.key = aes_encrypt(conf.key)
+
local key = "/ssl"
-- core.log.info("key: ", key)
local res, err = core.etcd.push("/ssl", conf)
@@ -167,4 +161,57 @@ function _M.delete(id)
end
+function _M.patch(id, conf)
+ if not id then
+ return 400, {error_msg = "missing ssl id"}
+ end
+
+ if not conf then
+ return 400, {error_msg = "missing new configuration"}
+ end
+
+ if type(conf) ~= "table" then
+ return 400, {error_msg = "invalid configuration"}
+ end
+
+ local key = "/ssl"
+ if id then
+ key = key .. "/" .. id
+ end
+
+ local res_old, err = core.etcd.get(key)
+ if not res_old then
+ core.log.error("failed to get ssl [", key, "] in etcd: ", err)
+ return 500, {error_msg = err}
+ end
+
+ if res_old.status ~= 200 then
+ return res_old.status, res_old.body
+ end
+ core.log.info("key: ", key, " old value: ",
+ core.json.delay_encode(res_old, true))
+
+
+ local node_value = res_old.body.node.value
+
+ node_value = core.table.merge(node_value, conf)
+
+ core.log.info("new ssl conf: ", core.json.delay_encode(node_value, true))
+
+ local id, err = check_conf(id, node_value, true)
+ if not id then
+ return 400, err
+ end
+
+ -- TODO: this is not safe, we need to use compare-set
+ local res, err = core.etcd.set(key, node_value)
+ if not res then
+ core.log.error("failed to set new ssl[", key, "] to etcd: ", err)
+ return 500, {error_msg = err}
+ end
+
+ return res.status, res.body
+end
+
+
return _M
diff --git a/apisix/admin/stream_routes.lua b/apisix/admin/stream_routes.lua
index e806da5e01d6b..969f775164e63 100644
--- a/apisix/admin/stream_routes.lua
+++ b/apisix/admin/stream_routes.lua
@@ -31,17 +31,19 @@ local function check_conf(id, conf, need_id)
id = id or conf.id
if need_id and not id then
- return nil, {error_msg = "missing stream stream route id"}
+ return nil, {error_msg = "missing stream route id"}
end
if not need_id and id then
- return nil, {error_msg = "wrong stream stream route id, do not need it"}
+ return nil, {error_msg = "wrong stream route id, do not need it"}
end
if need_id and conf.id and tostring(conf.id) ~= tostring(id) then
- return nil, {error_msg = "wrong stream stream route id"}
+ return nil, {error_msg = "wrong stream route id"}
end
+ conf.id = id
+
core.log.info("schema: ", core.json.delay_encode(core.schema.stream_route))
core.log.info("conf : ", core.json.delay_encode(conf))
local ok, err = core.schema.check(core.schema.stream_route, conf)
@@ -129,7 +131,7 @@ end
function _M.delete(id)
if not id then
- return 400, {error_msg = "missing stream stream route id"}
+ return 400, {error_msg = "missing stream route id"}
end
local key = "/stream_routes/" .. id
diff --git a/apisix/admin/upstreams.lua b/apisix/admin/upstreams.lua
index e989cd5527831..f09093ec8aae8 100644
--- a/apisix/admin/upstreams.lua
+++ b/apisix/admin/upstreams.lua
@@ -100,9 +100,7 @@ local function check_conf(id, conf, need_id)
end
-- let schema check id
- if id and not conf.id then
- conf.id = id
- end
+ conf.id = id
core.log.info("schema: ", core.json.delay_encode(core.schema.upstream))
core.log.info("conf : ", core.json.delay_encode(conf))
@@ -213,19 +211,19 @@ function _M.delete(id)
end
-function _M.patch(id, conf, sub_path)
+function _M.patch(id, conf)
if not id then
return 400, {error_msg = "missing upstream id"}
end
- if not sub_path then
- return 400, {error_msg = "missing sub-path"}
- end
-
if not conf then
return 400, {error_msg = "missing new configuration"}
end
+ if type(conf) ~= "table" then
+ return 400, {error_msg = "invalid configuration"}
+ end
+
local key = "/upstreams" .. "/" .. id
local res_old, err = core.etcd.get(key)
if not res_old then
@@ -240,32 +238,9 @@ function _M.patch(id, conf, sub_path)
core.json.delay_encode(res_old, true))
local new_value = res_old.body.node.value
- local sub_value = new_value
- local sub_paths = core.utils.split_uri(sub_path)
- for i = 1, #sub_paths - 1 do
- local sub_name = sub_paths[i]
- if sub_value[sub_name] == nil then
- sub_value[sub_name] = {}
- end
- sub_value = sub_value[sub_name]
+ new_value = core.table.merge(new_value, conf)
- if type(sub_value) ~= "table" then
- return 400, "invalid sub-path: /"
- .. core.table.concat(sub_paths, 1, i)
- end
- end
-
- if type(sub_value) ~= "table" then
- return 400, "invalid sub-path: /" .. sub_path
- end
-
- local sub_name = sub_paths[#sub_paths]
- if sub_name and sub_name ~= "" then
- sub_value[sub_name] = conf
- else
- new_value = conf
- end
core.log.info("new value ", core.json.delay_encode(new_value, true))
local id, err = check_conf(id, new_value, true)
diff --git a/apisix/balancer.lua b/apisix/balancer.lua
index a5134bcbd9280..1675128db0a81 100644
--- a/apisix/balancer.lua
+++ b/apisix/balancer.lua
@@ -14,23 +14,23 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
-local healthcheck = require("resty.healthcheck")
-local roundrobin = require("resty.roundrobin")
-local resty_chash = require("resty.chash")
+local healthcheck
+local require = require
+local discovery = require("apisix.discovery.init").discovery
local balancer = require("ngx.balancer")
local core = require("apisix.core")
-local error = error
-local str_char = string.char
-local str_gsub = string.gsub
-local pairs = pairs
+local ipairs = ipairs
local tostring = tostring
local set_more_tries = balancer.set_more_tries
local get_last_failure = balancer.get_last_failure
local set_timeouts = balancer.set_timeouts
-local upstreams_etcd
local module_name = "balancer"
+local pickers = {
+ roundrobin = require("apisix.balancer.roundrobin"),
+ chash = require("apisix.balancer.chash"),
+}
local lrucache_server_picker = core.lrucache.new({
@@ -39,33 +39,43 @@ local lrucache_server_picker = core.lrucache.new({
local lrucache_checker = core.lrucache.new({
ttl = 300, count = 256
})
+local lrucache_addr = core.lrucache.new({
+ ttl = 300, count = 1024 * 4
+})
local _M = {
- version = 0.1,
+ version = 0.2,
name = module_name,
}
local function fetch_health_nodes(upstream, checker)
+ local nodes = upstream.nodes
if not checker then
- return upstream.nodes
+ local new_nodes = core.table.new(0, #nodes)
+ for _, node in ipairs(nodes) do
+ -- TODO filter with metadata
+ new_nodes[node.host .. ":" .. node.port] = node.weight
+ end
+ return new_nodes
end
local host = upstream.checks and upstream.checks.host
- local up_nodes = core.table.new(0, core.table.nkeys(upstream.nodes))
-
- for addr, weight in pairs(upstream.nodes) do
- local ip, port = core.utils.parse_addr(addr)
- local ok = checker:get_target_status(ip, port, host)
+ local up_nodes = core.table.new(0, #nodes)
+ for _, node in ipairs(nodes) do
+ local ok = checker:get_target_status(node.host, node.port, host)
if ok then
- up_nodes[addr] = weight
+ -- TODO filter with metadata
+ up_nodes[node.host .. ":" .. node.port] = node.weight
end
end
if core.table.nkeys(up_nodes) == 0 then
core.log.warn("all upstream nodes is unhealth, use default")
- up_nodes = upstream.nodes
+ for _, node in ipairs(nodes) do
+ up_nodes[node.host .. ":" .. node.port] = node.weight
+ end
end
return up_nodes
@@ -73,18 +83,19 @@ end
local function create_checker(upstream, healthcheck_parent)
+ if healthcheck == nil then
+ healthcheck = require("resty.healthcheck")
+ end
local checker = healthcheck.new({
name = "upstream#" .. healthcheck_parent.key,
shm_name = "upstream-healthcheck",
checks = upstream.checks,
})
-
- for addr, weight in pairs(upstream.nodes) do
- local ip, port = core.utils.parse_addr(addr)
- local ok, err = checker:add_target(ip, port, upstream.checks.host)
+ for _, node in ipairs(upstream.nodes) do
+ local ok, err = checker:add_target(node.host, node.port, upstream.checks.host)
if not ok then
- core.log.error("failed to add new health check target: ", addr,
- " err: ", err)
+ core.log.error("failed to add new health check target: ", node.host, ":", node.port,
+ " err: ", err)
end
end
@@ -122,118 +133,60 @@ local function fetch_healthchecker(upstream, healthcheck_parent, version)
end
-local function fetch_chash_hash_key(ctx, upstream)
- local key = upstream.key
- local hash_on = upstream.hash_on or "vars"
- local chash_key
-
- if hash_on == "consumer" then
- chash_key = ctx.consumer_id
- elseif hash_on == "vars" then
- chash_key = ctx.var[key]
- elseif hash_on == "header" then
- chash_key = ctx.var["http_" .. key]
- elseif hash_on == "cookie" then
- chash_key = ctx.var["cookie_" .. key]
- end
-
- if not chash_key then
- chash_key = ctx.var["remote_addr"]
- core.log.warn("chash_key fetch is nil, use default chash_key ",
- "remote_addr: ", chash_key)
- end
- core.log.info("upstream key: ", key)
- core.log.info("hash_on: ", hash_on)
- core.log.info("chash_key: ", core.json.delay_encode(chash_key))
-
- return chash_key
-end
-
-
local function create_server_picker(upstream, checker)
- if upstream.type == "roundrobin" then
+ local picker = pickers[upstream.type]
+ if picker then
local up_nodes = fetch_health_nodes(upstream, checker)
core.log.info("upstream nodes: ", core.json.delay_encode(up_nodes))
- local picker = roundrobin:new(up_nodes)
- return {
- upstream = upstream,
- get = function ()
- return picker:find()
- end
- }
+ return picker.new(up_nodes, upstream)
end
- if upstream.type == "chash" then
- local up_nodes = fetch_health_nodes(upstream, checker)
- core.log.info("upstream nodes: ", core.json.delay_encode(up_nodes))
-
- local str_null = str_char(0)
-
- local servers, nodes = {}, {}
- for serv, weight in pairs(up_nodes) do
- local id = str_gsub(serv, ":", str_null)
-
- servers[id] = serv
- nodes[id] = weight
- end
+ return nil, "invalid balancer type: " .. upstream.type, 0
+end
- local picker = resty_chash:new(nodes)
- return {
- upstream = upstream,
- get = function (ctx)
- local chash_key = fetch_chash_hash_key(ctx, upstream)
- local id = picker:find(chash_key)
- -- core.log.warn("chash id: ", id, " val: ", servers[id])
- return servers[id]
- end
- }
- end
- return nil, "invalid balancer type: " .. upstream.type, 0
+local function parse_addr(addr)
+ local host, port, err = core.utils.parse_addr(addr)
+ return {host = host, port = port}, err
end
local function pick_server(route, ctx)
core.log.info("route: ", core.json.delay_encode(route, true))
core.log.info("ctx: ", core.json.delay_encode(ctx, true))
- local healthcheck_parent = route
- local up_id = route.value.upstream_id
- local up_conf = (route.dns_value and route.dns_value.upstream)
- or route.value.upstream
- if not up_id and not up_conf then
- return nil, nil, "missing upstream configuration"
+ local up_conf = ctx.upstream_conf
+ if up_conf.service_name then
+ if not discovery then
+ return nil, "discovery is uninitialized"
+ end
+ up_conf.nodes = discovery.nodes(up_conf.service_name)
end
- local version
- local key
-
- if up_id then
- if not upstreams_etcd then
- return nil, nil, "need to create a etcd instance for fetching "
- .. "upstream information"
- end
+ local nodes_count = up_conf.nodes and #up_conf.nodes or 0
+ if nodes_count == 0 then
+ return nil, "no valid upstream node"
+ end
- local up_obj = upstreams_etcd:get(tostring(up_id))
- if not up_obj then
- return nil, nil, "failed to find upstream by id: " .. up_id
+ if up_conf.timeout then
+ local timeout = up_conf.timeout
+ local ok, err = set_timeouts(timeout.connect, timeout.send,
+ timeout.read)
+ if not ok then
+ core.log.error("could not set upstream timeouts: ", err)
end
- core.log.info("upstream: ", core.json.delay_encode(up_obj))
-
- healthcheck_parent = up_obj
- up_conf = up_obj.dns_value or up_obj.value
- version = up_obj.modifiedIndex
- key = up_conf.type .. "#upstream_" .. up_id
-
- else
- version = ctx.conf_version
- key = up_conf.type .. "#route_" .. route.value.id
end
- if core.table.nkeys(up_conf.nodes) == 0 then
- return nil, nil, "no valid upstream node"
+ if nodes_count == 1 then
+ local node = up_conf.nodes[1]
+ ctx.balancer_ip = node.host
+ ctx.balancer_port = node.port
+ return node
end
+ local healthcheck_parent = ctx.upstream_healthcheck_parent
+ local version = ctx.upstream_version
+ local key = ctx.upstream_key
local checker = fetch_healthchecker(up_conf, healthcheck_parent, version)
ctx.balancer_try_count = (ctx.balancer_try_count or 0) + 1
@@ -256,11 +209,10 @@ local function pick_server(route, ctx)
if ctx.balancer_try_count == 1 then
local retries = up_conf.retries
- if retries and retries > 0 then
- set_more_tries(retries)
- else
- set_more_tries(core.table.nkeys(up_conf.nodes))
+ if not retries or retries <= 0 then
+ retries = #up_conf.nodes
end
+ set_more_tries(retries)
end
if checker then
@@ -270,45 +222,43 @@ local function pick_server(route, ctx)
local server_picker = lrucache_server_picker(key, version,
create_server_picker, up_conf, checker)
if not server_picker then
- return nil, nil, "failed to fetch server picker"
+ return nil, "failed to fetch server picker"
end
local server, err = server_picker.get(ctx)
if not server then
err = err or "no valid upstream node"
- return nil, nil, "failed to find valid upstream server, " .. err
+ return nil, "failed to find valid upstream server, " .. err
end
- if up_conf.timeout then
- local timeout = up_conf.timeout
- local ok, err = set_timeouts(timeout.connect, timeout.send,
- timeout.read)
- if not ok then
- core.log.error("could not set upstream timeouts: ", err)
- end
+ local res, err = lrucache_addr(server, nil, parse_addr, server)
+ if err then
+ core.log.error("failed to parse server addr: ", server, " err: ", err)
+ return core.response.exit(502)
end
- local ip, port, err = core.utils.parse_addr(server)
- ctx.balancer_ip = ip
- ctx.balancer_port = port
-
- return ip, port, err
+ ctx.balancer_ip = res.host
+ ctx.balancer_port = res.port
+ -- core.log.info("proxy to ", res.host, ":", res.port)
+ return res
end
+
+
-- for test
_M.pick_server = pick_server
function _M.run(route, ctx)
- local ip, port, err = pick_server(route, ctx)
- if err then
+ local server, err = pick_server(route, ctx)
+ if not server then
core.log.error("failed to pick server: ", err)
return core.response.exit(502)
end
- local ok, err = balancer.set_current_peer(ip, port)
+ local ok, err = balancer.set_current_peer(server.host, server.port)
if not ok then
- core.log.error("failed to set server peer [", ip, ":", port,
- "] err: ", err)
+ core.log.error("failed to set server peer [", server.host, ":",
+ server.port, "] err: ", err)
return core.response.exit(502)
end
@@ -317,34 +267,6 @@ end
function _M.init_worker()
- local err
- upstreams_etcd, err = core.config.new("/upstreams", {
- automatic = true,
- item_schema = core.schema.upstream,
- filter = function(upstream)
- upstream.has_domain = false
- if not upstream.value then
- return
- end
-
- for addr, _ in pairs(upstream.value.nodes or {}) do
- local host = core.utils.parse_addr(addr)
- if not core.utils.parse_ipv4(host) and
- not core.utils.parse_ipv6(host) then
- upstream.has_domain = true
- break
- end
- end
-
- core.log.info("filter upstream: ",
- core.json.delay_encode(upstream))
- end,
- })
- if not upstreams_etcd then
- error("failed to create etcd instance for fetching upstream: " .. err)
- return
- end
end
-
return _M
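The reworked `fetch_health_nodes` converts the new array-style node list (`{host, port, weight}` records) into the `"host:port" -> weight` map the pickers expect, falling back to the full list when every node is unhealthy. A Python sketch of that conversion (illustrative only, not part of the patch; `is_healthy` stands in for `checker:get_target_status`):

```python
def fetch_health_nodes(nodes, is_healthy=None):
    """Map array-style nodes to the "host:port" -> weight form the pickers use.

    is_healthy mimics checker:get_target_status(host, port); None means
    no health checker is configured, so every node is kept."""
    if is_healthy is None:
        return {f"{n['host']}:{n['port']}": n['weight'] for n in nodes}

    up = {f"{n['host']}:{n['port']}": n['weight']
          for n in nodes if is_healthy(n['host'], n['port'])}
    if not up:
        # all upstream nodes are unhealthy: fall back to the full list
        up = {f"{n['host']}:{n['port']}": n['weight'] for n in nodes}
    return up
```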
diff --git a/apisix/balancer/chash.lua b/apisix/balancer/chash.lua
new file mode 100644
index 0000000000000..38831cdb4e489
--- /dev/null
+++ b/apisix/balancer/chash.lua
@@ -0,0 +1,80 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+
+local core = require("apisix.core")
+local resty_chash = require("resty.chash")
+local str_char = string.char
+local str_gsub = string.gsub
+local pairs = pairs
+
+
+local _M = {}
+
+
+local function fetch_chash_hash_key(ctx, upstream)
+ local key = upstream.key
+ local hash_on = upstream.hash_on or "vars"
+ local chash_key
+
+ if hash_on == "consumer" then
+ chash_key = ctx.consumer_id
+ elseif hash_on == "vars" then
+ chash_key = ctx.var[key]
+ elseif hash_on == "header" then
+ chash_key = ctx.var["http_" .. key]
+ elseif hash_on == "cookie" then
+ chash_key = ctx.var["cookie_" .. key]
+ end
+
+ if not chash_key then
+ chash_key = ctx.var["remote_addr"]
+ core.log.warn("chash_key is nil, use default chash_key ",
+ "remote_addr: ", chash_key)
+ end
+ core.log.info("upstream key: ", key)
+ core.log.info("hash_on: ", hash_on)
+ core.log.info("chash_key: ", core.json.delay_encode(chash_key))
+
+ return chash_key
+end
+
+
+function _M.new(up_nodes, upstream)
+ local str_null = str_char(0)
+
+ local servers, nodes = {}, {}
+ for serv, weight in pairs(up_nodes) do
+ local id = str_gsub(serv, ":", str_null)
+
+ servers[id] = serv
+ nodes[id] = weight
+ end
+
+ local picker = resty_chash:new(nodes)
+ return {
+ upstream = upstream,
+ get = function (ctx)
+ local chash_key = fetch_chash_hash_key(ctx, upstream)
+ local id = picker:find(chash_key)
+ -- core.log.warn("chash id: ", id, " val: ", servers[id])
+ return servers[id]
+ end
+ }
+end
+
+
+return _M
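In `_M.new` above, the `:` in each `host:port` name is replaced with a NUL byte before the names are handed to resty.chash, and the `servers` table maps the encoded ids back to the original names. A hypothetical Python sketch of just that encoding step (the ring lookup itself is delegated to resty.chash and is not modeled here):

```python
def build_chash_tables(up_nodes):
    """Encode "host:port" server names into chash node ids.

    Mirrors the str_gsub(serv, ":", "\0") step: nodes maps id -> weight
    for the hash ring, servers maps id back to the original address."""
    servers, nodes = {}, {}
    for serv, weight in up_nodes.items():
        node_id = serv.replace(":", "\0")
        servers[node_id] = serv
        nodes[node_id] = weight
    return servers, nodes
```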
diff --git a/apisix/balancer/roundrobin.lua b/apisix/balancer/roundrobin.lua
new file mode 100644
index 0000000000000..dac4f03ea10d6
--- /dev/null
+++ b/apisix/balancer/roundrobin.lua
@@ -0,0 +1,34 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+
+local roundrobin = require("resty.roundrobin")
+
+local _M = {}
+
+
+function _M.new(up_nodes, upstream)
+ local picker = roundrobin:new(up_nodes)
+ return {
+ upstream = upstream,
+ get = function ()
+ return picker:find()
+ end
+ }
+end
+
+
+return _M
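The roundrobin picker delegates weighted selection to resty.roundrobin. As a rough illustration of what weighted round robin does with the `"host:port" -> weight` map (resty.roundrobin's exact algorithm may differ; this sketch uses the nginx-style smooth variant):

```python
def smooth_wrr(nodes, n):
    """Smooth weighted round robin over {server: weight}; returns n picks.

    Each round every server gains its weight; the highest current value
    is picked and reduced by the total weight, spreading picks evenly."""
    current = {s: 0 for s in nodes}
    total = sum(nodes.values())
    picks = []
    for _ in range(n):
        for s, w in nodes.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks
```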
diff --git a/apisix/core/config_etcd.lua b/apisix/core/config_etcd.lua
index dd0a5487bac1f..fbb6df8dbf5d5 100644
--- a/apisix/core/config_etcd.lua
+++ b/apisix/core/config_etcd.lua
@@ -308,6 +308,7 @@ local function sync_data(self)
key = short_key(self, self.values[i].key)
self.values_hash[key] = i
end
+ self.sync_times = 0
end
self.conf_version = self.conf_version + 1
diff --git a/apisix/core/config_yaml.lua b/apisix/core/config_yaml.lua
index 7803deccf4e16..eed861fafaf35 100644
--- a/apisix/core/config_yaml.lua
+++ b/apisix/core/config_yaml.lua
@@ -58,7 +58,10 @@ local mt = {
local apisix_yaml
local apisix_yaml_ctime
-local function read_apisix_yaml(pre_mtime)
+local function read_apisix_yaml(premature, pre_mtime)
+ if premature then
+ return
+ end
local attributes, err = lfs.attributes(apisix_yaml_path)
if not attributes then
log.error("failed to fetch ", apisix_yaml_path, " attributes: ", err)
diff --git a/apisix/core/table.lua b/apisix/core/table.lua
index 0fc64acc34440..e666e162a2293 100644
--- a/apisix/core/table.lua
+++ b/apisix/core/table.lua
@@ -25,13 +25,14 @@ local type = type
local _M = {
- version = 0.1,
+ version = 0.2,
new = new_tab,
clear = require("table.clear"),
nkeys = nkeys,
insert = table.insert,
concat = table.concat,
clone = require("table.clone"),
+ isarray = require("table.isarray"),
}
@@ -84,5 +85,24 @@ local function deepcopy(orig)
end
_M.deepcopy = deepcopy
+local ngx_null = ngx.null
+local function merge(origin, extend)
+ for k, v in pairs(extend) do
+ if type(v) == "table" then
+ if type(origin[k] or false) == "table" then
+ merge(origin[k] or {}, extend[k] or {})
+ else
+ origin[k] = v
+ end
+ elseif v == ngx_null then
+ origin[k] = nil
+ else
+ origin[k] = v
+ end
+ end
+
+ return origin
+end
+_M.merge = merge
return _M
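`merge` recurses into nested tables and treats `ngx.null` as a deletion marker, which is what lets a JSON PATCH body remove fields (JSON `null` decodes to `ngx.null`). A Python sketch of the same semantics, with a `NULL` sentinel standing in for `ngx.null`:

```python
NULL = object()  # stands in for ngx.null


def merge(origin, extend):
    """Recursively merge extend into origin; NULL values delete keys."""
    for k, v in extend.items():
        if isinstance(v, dict) and isinstance(origin.get(k), dict):
            merge(origin[k], v)
        elif v is NULL:
            origin.pop(k, None)
        else:
            origin[k] = v
    return origin
```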
diff --git a/apisix/core/version.lua b/apisix/core/version.lua
index dfd10502979b8..c3606c206e533 100644
--- a/apisix/core/version.lua
+++ b/apisix/core/version.lua
@@ -15,5 +15,5 @@
-- limitations under the License.
--
return {
- VERSION = "1.2"
+ VERSION = "1.4"
}
diff --git a/apisix/discovery/eureka.lua b/apisix/discovery/eureka.lua
new file mode 100644
index 0000000000000..d4b4368536170
--- /dev/null
+++ b/apisix/discovery/eureka.lua
@@ -0,0 +1,253 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+
+local local_conf = require("apisix.core.config_local").local_conf()
+local http = require("resty.http")
+local core = require("apisix.core")
+local ipmatcher = require("resty.ipmatcher")
+local ipairs = ipairs
+local tostring = tostring
+local type = type
+local math_random = math.random
+local error = error
+local ngx = ngx
+local ngx_timer_at = ngx.timer.at
+local ngx_timer_every = ngx.timer.every
+local string_sub = string.sub
+local string_find = string.find
+local log = core.log
+
+local default_weight
+local applications
+
+local schema = {
+ type = "object",
+ properties = {
+ host = {
+ type = "array",
+ minItems = 1,
+ items = {
+ type = "string",
+ },
+ },
+ fetch_interval = {type = "integer", minimum = 1, default = 30},
+ prefix = {type = "string"},
+ weight = {type = "integer", minimum = 0},
+ timeout = {
+ type = "object",
+ properties = {
+ connect = {type = "integer", minimum = 1, default = 2000},
+ send = {type = "integer", minimum = 1, default = 2000},
+ read = {type = "integer", minimum = 1, default = 5000},
+ }
+ },
+ },
+ required = {"host"}
+}
+
+
+local _M = {
+ version = 0.1,
+}
+
+
+local function service_info()
+ local host = local_conf.eureka and local_conf.eureka.host
+ if not host then
+ log.error("eureka.host is not set")
+ return
+ end
+
+ local basic_auth
+ -- TODO Add health check to get healthy nodes.
+ local url = host[math_random(#host)]
+ local auth_idx = string_find(url, "@", 1, true)
+ if auth_idx then
+ local protocol_idx = string_find(url, "://", 1, true)
+ local protocol = string_sub(url, 1, protocol_idx + 2)
+ local user_and_password = string_sub(url, protocol_idx + 3, auth_idx - 1)
+ local other = string_sub(url, auth_idx + 1)
+ url = protocol .. other
+ basic_auth = "Basic " .. ngx.encode_base64(user_and_password)
+ end
+ if local_conf.eureka.prefix then
+ url = url .. local_conf.eureka.prefix
+ end
+ if string_sub(url, #url) ~= "/" then
+ url = url .. "/"
+ end
+
+ return url, basic_auth
+end
+
+
+local function request(request_uri, basic_auth, method, path, query, body)
+ log.info("eureka uri:", request_uri, ".")
+ local url = request_uri .. path
+ local headers = core.table.new(0, 5)
+ headers['Connection'] = 'Keep-Alive'
+ headers['Accept'] = 'application/json'
+
+ if basic_auth then
+ headers['Authorization'] = basic_auth
+ end
+
+ if body and 'table' == type(body) then
+ local err
+ body, err = core.json.encode(body)
+ if not body then
+ return nil, 'invalid body: ' .. err
+ end
+ -- log.warn(method, url, body)
+ headers['Content-Type'] = 'application/json'
+ end
+
+ local httpc = http.new()
+ local timeout = local_conf.eureka.timeout
+ local connect_timeout = timeout and timeout.connect or 2000
+ local send_timeout = timeout and timeout.send or 2000
+ local read_timeout = timeout and timeout.read or 5000
+ log.info("connect_timeout:", connect_timeout, ", send_timeout:", send_timeout,
+ ", read_timeout:", read_timeout, ".")
+ httpc:set_timeouts(connect_timeout, send_timeout, read_timeout)
+ return httpc:request_uri(url, {
+ version = 1.1,
+ method = method,
+ headers = headers,
+ query = query,
+ body = body,
+ ssl_verify = false,
+ })
+end
+
+
+local function parse_instance(instance)
+ local status = instance.status
+ local overridden_status = instance.overriddenstatus or instance.overriddenStatus
+ if overridden_status and overridden_status ~= "UNKNOWN" then
+ status = overridden_status
+ end
+
+ if status ~= "UP" then
+ return
+ end
+ local port
+ if tostring(instance.port["@enabled"]) == "true" and instance.port["$"] then
+ port = instance.port["$"]
+ -- secure = false
+ end
+ if tostring(instance.securePort["@enabled"]) == "true" and instance.securePort["$"] then
+ port = instance.securePort["$"]
+ -- secure = true
+ end
+ local ip = instance.ipAddr
+ if not ipmatcher.parse_ipv4(ip) and
+ not ipmatcher.parse_ipv6(ip) then
+ log.error(instance.app, " service ", instance.hostName, " node IP ", ip,
+ " is invalid (must be IPv4 or IPv6).")
+ return
+ end
+ return ip, port, instance.metadata
+end
+
+
+local function fetch_full_registry(premature)
+ if premature then
+ return
+ end
+
+ local request_uri, basic_auth = service_info()
+ if not request_uri then
+ return
+ end
+
+ local res, err = request(request_uri, basic_auth, "GET", "apps")
+ if not res then
+ log.error("failed to fetch registry: ", err)
+ return
+ end
+
+ if not res.body or res.status ~= 200 then
+ log.error("failed to fetch registry, status = ", res.status)
+ return
+ end
+
+ local json_str = res.body
+ local data, err = core.json.decode(json_str)
+ if not data then
+ log.error("invalid response body: ", json_str, " err: ", err)
+ return
+ end
+ local apps = data.applications.application
+ local up_apps = core.table.new(0, #apps)
+ for _, app in ipairs(apps) do
+ for _, instance in ipairs(app.instance) do
+ local ip, port, metadata = parse_instance(instance)
+ if ip and port then
+ local nodes = up_apps[app.name]
+ if not nodes then
+ nodes = core.table.new(#app.instance, 0)
+ up_apps[app.name] = nodes
+ end
+ core.table.insert(nodes, {
+ host = ip,
+ port = port,
+ weight = metadata and metadata.weight or default_weight,
+ metadata = metadata,
+ })
+ if metadata then
+ -- remove useless data
+ metadata.weight = nil
+ end
+ end
+ end
+ end
+ applications = up_apps
+end
+
+
+function _M.nodes(service_name)
+ if not applications then
+ log.error("failed to fetch nodes for: ", service_name)
+ return
+ end
+
+ return applications[service_name]
+end
+
+
+function _M.init_worker()
+ if not local_conf.eureka or not local_conf.eureka.host or #local_conf.eureka.host == 0 then
+ error("eureka.host is not set")
+ return
+ end
+
+ local ok, err = core.schema.check(schema, local_conf.eureka)
+ if not ok then
+ error("invalid eureka configuration: " .. err)
+ return
+ end
+ default_weight = local_conf.eureka.weight or 100
+ log.info("default_weight:", default_weight, ".")
+ local fetch_interval = local_conf.eureka.fetch_interval or 30
+ log.info("fetch_interval:", fetch_interval, ".")
+ ngx_timer_at(0, fetch_full_registry)
+ ngx_timer_every(fetch_interval, fetch_full_registry)
+end
+
+
+return _M
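`parse_instance` keeps only `UP` instances (letting a non-`UNKNOWN` `overriddenStatus` override `status`) and prefers an enabled `securePort` over the plain port. A Python sketch of that selection logic (illustrative; the IP validation done with resty.ipmatcher is omitted here):

```python
def parse_instance(instance):
    """Return (ip, port) for an UP Eureka instance record, else None.

    overriddenStatus wins over status unless it is UNKNOWN; an enabled
    securePort takes precedence over the plain port, as in the patch."""
    status = instance.get("status")
    overridden = instance.get("overriddenstatus") or instance.get("overriddenStatus")
    if overridden and overridden != "UNKNOWN":
        status = overridden
    if status != "UP":
        return None

    port = None
    p = instance.get("port", {})
    if str(p.get("@enabled")).lower() == "true" and p.get("$"):
        port = p["$"]
    sp = instance.get("securePort", {})
    if str(sp.get("@enabled")).lower() == "true" and sp.get("$"):
        port = sp["$"]
    return instance.get("ipAddr"), port
```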
diff --git a/apisix/discovery/init.lua b/apisix/discovery/init.lua
new file mode 100644
index 0000000000000..16aafe62c50d5
--- /dev/null
+++ b/apisix/discovery/init.lua
@@ -0,0 +1,33 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+
+local log = require("apisix.core.log")
+local local_conf = require("apisix.core.config_local").local_conf()
+
+local discovery_type = local_conf.apisix and local_conf.apisix.discovery
+local discovery
+
+if discovery_type then
+ log.info("use discovery: ", discovery_type)
+ discovery = require("apisix.discovery." .. discovery_type)
+end
+
+
+return {
+ version = 0.1,
+ discovery = discovery
+}
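For reference, enabling a discovery client would look roughly like the following `conf/config.yaml` fragment. The field names under `eureka` come from the schema in `apisix/discovery/eureka.lua` above; the exact file layout and the example endpoint are assumptions, not part of this patch:

```yaml
apisix:
  discovery: eureka          # loads apisix.discovery.eureka at startup
eureka:
  host:                      # at least one registry endpoint (schema: minItems 1)
    - "http://127.0.0.1:8761"
  prefix: "/eureka/"
  fetch_interval: 30         # seconds between full-registry fetches
  weight: 100                # default node weight when instance metadata has none
  timeout:
    connect: 2000
    send: 2000
    read: 5000
```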
diff --git a/apisix/http/router/radixtree_sni.lua b/apisix/http/router/radixtree_sni.lua
index 0ecc3bf3cbb2a..1eb7aa54508b4 100644
--- a/apisix/http/router/radixtree_sni.lua
+++ b/apisix/http/router/radixtree_sni.lua
@@ -18,9 +18,14 @@ local get_request = require("resty.core.base").get_request
local radixtree_new = require("resty.radixtree").new
local core = require("apisix.core")
local ngx_ssl = require("ngx.ssl")
-local ipairs = ipairs
+local config_util = require("apisix.core.config_util")
+local ipairs = ipairs
local type = type
local error = error
+local str_find = string.find
+local aes = require "resty.aes"
+local assert = assert
+local ngx_decode_base64 = ngx.decode_base64
local ssl_certificates
local radixtree_router
local radixtree_router_ver
@@ -38,9 +43,44 @@ local function create_router(ssl_items)
local route_items = core.table.new(#ssl_items, 0)
local idx = 0
- for _, ssl in ipairs(ssl_items) do
- if type(ssl) == "table" then
- local sni = ssl.value.sni:reverse()
+ local local_conf = core.config.local_conf()
+ local iv
+ if local_conf and local_conf.apisix
+ and local_conf.apisix.ssl
+ and local_conf.apisix.ssl.key_encrypt_salt then
+ iv = local_conf.apisix.ssl.key_encrypt_salt
+ end
+ local aes_128_cbc_with_iv = (type(iv) == "string" and #iv == 16) and
+ assert(aes:new(iv, nil, aes.cipher(128, "cbc"), {iv=iv})) or nil
+
+ for _, ssl in config_util.iterate_values(ssl_items) do
+ if ssl.value ~= nil and
+ (ssl.value.status == nil or ssl.value.status == 1) then -- compatible with old version
+
+ local j = 0
+ local sni
+ if type(ssl.value.snis) == "table" and #ssl.value.snis > 0 then
+ sni = core.table.new(0, #ssl.value.snis)
+ for _, s in ipairs(ssl.value.snis) do
+ j = j + 1
+ sni[j] = s:reverse()
+ end
+ else
+ sni = ssl.value.sni:reverse()
+ end
+
+ -- decrypt private key
+ if aes_128_cbc_with_iv ~= nil and
+ not str_find(ssl.value.key, "---") then
+ local decrypted = aes_128_cbc_with_iv:decrypt(ngx_decode_base64(ssl.value.key))
+ if decrypted == nil then
+ core.log.error("decrypt ssl key failed. key[", ssl.value.key, "] ")
+ else
+ ssl.value.key = decrypted
+ end
+ end
+
+
idx = idx + 1
route_items[idx] = {
paths = sni,
@@ -49,6 +89,7 @@ local function create_router(ssl_items)
return
end
api_ctx.matched_ssl = ssl
+ api_ctx.matched_sni = sni
end
}
end
@@ -114,14 +155,37 @@ function _M.match_and_set(api_ctx)
end
core.log.debug("sni: ", sni)
- local ok = radixtree_router:dispatch(sni:reverse(), nil, api_ctx)
+
+ local sni_rev = sni:reverse()
+ local ok = radixtree_router:dispatch(sni_rev, nil, api_ctx)
if not ok then
core.log.warn("not found any valid sni configuration")
return false
end
+
+ if type(api_ctx.matched_sni) == "table" then
+ local matched = false
+ for _, msni in ipairs(api_ctx.matched_sni) do
+ if sni_rev == msni or not str_find(sni_rev, ".", #msni, true) then
+ matched = true
+ end
+ end
+ if not matched then
+ core.log.warn("not found any valid sni configuration, matched sni: ",
+ core.json.delay_encode(api_ctx.matched_sni, true), " current sni: ", sni)
+ return false
+ end
+ else
+ if str_find(sni_rev, ".", #api_ctx.matched_sni, true) then
+ core.log.warn("not found any valid sni configuration, matched sni: ",
+ api_ctx.matched_sni:reverse(), " current sni: ", sni)
+ return false
+ end
+ end
+
local matched_ssl = api_ctx.matched_ssl
- core.log.info("debug: ", core.json.delay_encode(matched_ssl, true))
+ core.log.info("debug - matched: ", core.json.delay_encode(matched_ssl, true))
ok, err = set_pem_ssl_key(matched_ssl.value.cert, matched_ssl.value.key)
if not ok then
return false, err
@@ -135,7 +199,7 @@ function _M.init_worker()
local err
ssl_certificates, err = core.config.new("/ssl", {
automatic = true,
- item_schema = core.schema.ssl
+ item_schema = core.schema.ssl,
})
if not ssl_certificates then
error("failed to create etcd instance for fetching ssl certificates: "
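The SNI router stores names reversed so that the radixtree's prefix matching becomes domain-suffix matching on the client SNI. A minimal Python sketch of that idea (the dot-boundary checks that `match_and_set` adds on top of the dispatch are not modeled):

```python
def sni_suffix_match(stored_sni, client_sni):
    """Suffix matching via reversed strings, as the radixtree dispatch does.

    Reversing both names turns "does client_sni end with stored_sni" into
    a simple prefix test, which a radixtree can answer efficiently."""
    return client_sni[::-1].startswith(stored_sni[::-1])
```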
diff --git a/apisix/http/service.lua b/apisix/http/service.lua
index 42d31dd58b3c0..161d82fe23588 100644
--- a/apisix/http/service.lua
+++ b/apisix/http/service.lua
@@ -14,7 +14,8 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
-local core = require("apisix.core")
+local core = require("apisix.core")
+local ipairs = ipairs
local services
local error = error
local pairs = pairs
@@ -45,17 +46,36 @@ local function filter(service)
return
end
- if not service.value.upstream then
+ if not service.value.upstream or not service.value.upstream.nodes then
return
end
- for addr, _ in pairs(service.value.upstream.nodes or {}) do
- local host = core.utils.parse_addr(addr)
- if not core.utils.parse_ipv4(host) and
- not core.utils.parse_ipv6(host) then
- service.has_domain = true
- break
+ local nodes = service.value.upstream.nodes
+ if core.table.isarray(nodes) then
+ for _, node in ipairs(nodes) do
+ local host = node.host
+ if not core.utils.parse_ipv4(host) and
+ not core.utils.parse_ipv6(host) then
+ service.has_domain = true
+ break
+ end
end
+ else
+ local new_nodes = core.table.new(core.table.nkeys(nodes), 0)
+ for addr, weight in pairs(nodes) do
+ local host, port = core.utils.parse_addr(addr)
+ if not core.utils.parse_ipv4(host) and
+ not core.utils.parse_ipv6(host) then
+ service.has_domain = true
+ end
+ local node = {
+ host = host,
+ port = port,
+ weight = weight,
+ }
+ core.table.insert(new_nodes, node)
+ end
+ service.value.upstream.nodes = new_nodes
end
core.log.info("filter service: ", core.json.delay_encode(service))
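The `filter` change above also normalizes legacy map-style upstream nodes (`"host:port" -> weight`) into the new array form so the rest of the code only sees `{host, port, weight}` records. A Python sketch of that normalization (illustrative; `core.utils.parse_addr`'s handling of missing ports is not reproduced):

```python
def normalize_nodes(nodes):
    """Convert map-style nodes ("host:port" -> weight) to the array form
    [{host, port, weight}]; array-style input is returned unchanged."""
    if isinstance(nodes, list):
        return nodes
    out = []
    for addr, weight in nodes.items():
        host, _, port = addr.rpartition(":")
        out.append({"host": host, "port": int(port), "weight": weight})
    return out
```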
diff --git a/apisix/init.lua b/apisix/init.lua
index 48c5b8098bdf9..41295f095dd6c 100644
--- a/apisix/init.lua
+++ b/apisix/init.lua
@@ -22,13 +22,14 @@ local service_fetch = require("apisix.http.service").get
local admin_init = require("apisix.admin.init")
local get_var = require("resty.ngxvar").fetch
local router = require("apisix.router")
+local set_upstream = require("apisix.upstream").set_by_route
local ipmatcher = require("resty.ipmatcher")
local ngx = ngx
local get_method = ngx.req.get_method
local ngx_exit = ngx.exit
local math = math
local error = error
-local pairs = pairs
+local ipairs = ipairs
local tostring = tostring
local load_balancer
@@ -42,7 +43,7 @@ local function parse_args(args)
end
-local _M = {version = 0.3}
+local _M = {version = 0.4}
function _M.http_init(args)
@@ -74,7 +75,10 @@ function _M.http_init_worker()
if not ok then
error("failed to init worker event: " .. err)
end
-
+ local discovery = require("apisix.discovery.init").discovery
+ if discovery and discovery.init_worker then
+ discovery.init_worker()
+ end
require("apisix.balancer").init_worker()
load_balancer = require("apisix.balancer").run
require("apisix.admin.init").init_worker()
@@ -89,6 +93,7 @@ function _M.http_init_worker()
end
require("apisix.debug").init_worker()
+ require("apisix.upstream").init_worker()
local local_conf = core.config.local_conf()
local dns_resolver_valid = local_conf and local_conf.apisix and
@@ -107,30 +112,7 @@ local function run_plugin(phase, plugins, api_ctx)
end
plugins = plugins or api_ctx.plugins
- if not plugins then
- return api_ctx
- end
-
- if phase == "balancer" then
- local balancer_name = api_ctx.balancer_name
- local balancer_plugin = api_ctx.balancer_plugin
- if balancer_name and balancer_plugin then
- local phase_fun = balancer_plugin[phase]
- phase_fun(balancer_plugin, api_ctx)
- return api_ctx
- end
-
- for i = 1, #plugins, 2 do
- local phase_fun = plugins[i][phase]
- if phase_fun and
- (not balancer_name or balancer_name == plugins[i].name) then
- phase_fun(plugins[i + 1], api_ctx)
- if api_ctx.balancer_name == plugins[i].name then
- api_ctx.balancer_plugin = plugins[i]
- return api_ctx
- end
- end
- end
+ if not plugins or #plugins == 0 then
return api_ctx
end
@@ -179,71 +161,71 @@ function _M.http_ssl_phase()
end
-local function parse_domain_in_up(up, ver)
- local new_nodes = core.table.new(0, 8)
- for addr, weight in pairs(up.value.nodes) do
- local host, port = core.utils.parse_addr(addr)
+local function parse_domain(host)
+ local ip_info, err = core.utils.dns_parse(dns_resolver, host)
+ if not ip_info then
+ core.log.error("failed to parse domain for ", host, ", error: ", err)
+ return nil, err
+ end
+
+ core.log.info("parse addr: ", core.json.delay_encode(ip_info))
+ core.log.info("resolver: ", core.json.delay_encode(dns_resolver))
+ core.log.info("host: ", host)
+ if ip_info.address then
+ core.log.info("dns resolver domain: ", host, " to ", ip_info.address)
+ return ip_info.address
+ else
+ return nil, "failed to parse domain"
+ end
+end
+
+
+local function parse_domain_for_nodes(nodes)
+ local new_nodes = core.table.new(#nodes, 0)
+ for _, node in ipairs(nodes) do
+ local host = node.host
if not ipmatcher.parse_ipv4(host) and
- not ipmatcher.parse_ipv6(host) then
- local ip_info, err = core.utils.dns_parse(dns_resolver, host)
- if not ip_info then
- return nil, err
+ not ipmatcher.parse_ipv6(host) then
+ local ip, err = parse_domain(host)
+ if ip then
+ local new_node = core.table.clone(node)
+ new_node.host = ip
+ core.table.insert(new_nodes, new_node)
end
- core.log.info("parse addr: ", core.json.delay_encode(ip_info))
- core.log.info("resolver: ", core.json.delay_encode(dns_resolver))
- core.log.info("host: ", host)
- if ip_info.address then
- new_nodes[ip_info.address .. ":" .. port] = weight
- core.log.info("dns resolver domain: ", host, " to ",
- ip_info.address)
- else
- return nil, "failed to parse domain in route"
+ if err then
+ return nil, err
end
else
- new_nodes[addr] = weight
+ core.table.insert(new_nodes, node)
end
end
+ return new_nodes
+end
+
+local function parse_domain_in_up(up, ver)
+ local nodes = up.value.nodes
+ local new_nodes, err = parse_domain_for_nodes(nodes)
+ if not new_nodes then
+ return nil, err
+ end
up.dns_value = core.table.clone(up.value)
up.dns_value.nodes = new_nodes
- core.log.info("parse upstream which contain domain: ",
- core.json.delay_encode(up))
+ core.log.info("parse upstream which contain domain: ", core.json.delay_encode(up))
return up
end
local function parse_domain_in_route(route, ver)
- local new_nodes = core.table.new(0, 8)
- for addr, weight in pairs(route.value.upstream.nodes) do
- local host, port = core.utils.parse_addr(addr)
- if not ipmatcher.parse_ipv4(host) and
- not ipmatcher.parse_ipv6(host) then
- local ip_info, err = core.utils.dns_parse(dns_resolver, host)
- if not ip_info then
- return nil, err
- end
-
- core.log.info("parse addr: ", core.json.delay_encode(ip_info))
- core.log.info("resolver: ", core.json.delay_encode(dns_resolver))
- core.log.info("host: ", host)
- if ip_info and ip_info.address then
- new_nodes[ip_info.address .. ":" .. port] = weight
- core.log.info("dns resolver domain: ", host, " to ",
- ip_info.address)
- else
- return nil, "failed to parse domain in route"
- end
-
- else
- new_nodes[addr] = weight
- end
+ local nodes = route.value.upstream.nodes
+ local new_nodes, err = parse_domain_for_nodes(nodes)
+ if not new_nodes then
+ return nil, err
end
-
route.dns_value = core.table.deepcopy(route.value)
route.dns_value.upstream.nodes = new_nodes
- core.log.info("parse route which contain domain: ",
- core.json.delay_encode(route))
+ core.log.info("parse route which contain domain: ", core.json.delay_encode(route))
return route
end
@@ -280,6 +262,8 @@ function _M.http_access_phase()
api_ctx.conf_type = nil
api_ctx.conf_version = nil
api_ctx.conf_id = nil
+
+ api_ctx.global_rules = router.global_rules
end
router.router_http.match(api_ctx)
@@ -380,6 +364,12 @@ function _M.http_access_phase()
end
end
run_plugin("access", plugins, api_ctx)
+
+ local ok, err = set_upstream(route, api_ctx)
+ if not ok then
+ core.log.error("failed to parse upstream: ", err)
+ core.response.exit(500)
+ end
end
@@ -440,38 +430,43 @@ function _M.grpc_access_phase()
run_plugin("rewrite", plugins, api_ctx)
run_plugin("access", plugins, api_ctx)
+
+ set_upstream(route, api_ctx)
end
-local function common_phase(plugin_name)
+
+local function common_phase(phase_name)
local api_ctx = ngx.ctx.api_ctx
if not api_ctx then
return
end
- if router.global_rules and router.global_rules.values
- and #router.global_rules.values > 0
- then
+ if api_ctx.global_rules then
local plugins = core.tablepool.fetch("plugins", 32, 0)
- local values = router.global_rules.values
+ local values = api_ctx.global_rules.values
for _, global_rule in config_util.iterate_values(values) do
core.table.clear(plugins)
plugins = plugin.filter(global_rule, plugins)
- run_plugin(plugin_name, plugins, api_ctx)
+ run_plugin(phase_name, plugins, api_ctx)
end
core.tablepool.release("plugins", plugins)
end
- run_plugin(plugin_name, nil, api_ctx)
+
+ run_plugin(phase_name, nil, api_ctx)
return api_ctx
end
+
function _M.http_header_filter_phase()
common_phase("header_filter")
end
+
function _M.http_body_filter_phase()
common_phase("body_filter")
end
+
function _M.http_log_phase()
local api_ctx = common_phase("log")
@@ -488,6 +483,7 @@ function _M.http_log_phase()
core.tablepool.release("api_ctx", api_ctx)
end
+
function _M.http_balancer_phase()
local api_ctx = ngx.ctx.api_ctx
if not api_ctx then
@@ -495,22 +491,10 @@ function _M.http_balancer_phase()
return core.response.exit(500)
end
- -- first time
- if not api_ctx.balancer_name then
- run_plugin("balancer", nil, api_ctx)
- if api_ctx.balancer_name then
- return
- end
- end
-
- if api_ctx.balancer_name and api_ctx.balancer_name ~= "default" then
- return run_plugin("balancer", nil, api_ctx)
- end
-
- api_ctx.balancer_name = "default"
load_balancer(api_ctx.matched_route, api_ctx)
end
+
local function cors_admin()
local local_conf = core.config.local_conf()
if local_conf.apisix and not local_conf.apisix.enable_admin_cors then
@@ -536,6 +520,10 @@ local function cors_admin()
"Access-Control-Max-Age", "3600")
end
+local function add_content_type()
+ core.response.set_header("Content-Type", "application/json")
+end
+
do
local router
@@ -547,6 +535,9 @@ function _M.http_admin()
-- add cors rsp header
cors_admin()
+ -- add content type to rsp header
+ add_content_type()
+
-- core.log.info("uri: ", get_var("uri"), " method: ", get_method())
local ok = router:dispatch(get_var("uri"), {method = get_method()})
if not ok then
@@ -606,7 +597,13 @@ function _M.stream_preread_phase()
api_ctx.plugins = plugin.stream_filter(matched_route, plugins)
-- core.log.info("valid plugins: ", core.json.delay_encode(plugins, true))
+ api_ctx.conf_type = "stream/route"
+ api_ctx.conf_version = matched_route.modifiedIndex
+ api_ctx.conf_id = matched_route.value.id
+
run_plugin("preread", plugins, api_ctx)
+
+ set_upstream(matched_route, api_ctx)
end
@@ -618,19 +615,6 @@ function _M.stream_balancer_phase()
return ngx_exit(1)
end
- -- first time
- if not api_ctx.balancer_name then
- run_plugin("balancer", nil, api_ctx)
- if api_ctx.balancer_name then
- return
- end
- end
-
- if api_ctx.balancer_name and api_ctx.balancer_name ~= "default" then
- return run_plugin("balancer", nil, api_ctx)
- end
-
- api_ctx.balancer_name = "default"
load_balancer(api_ctx.matched_route, api_ctx)
end
diff --git a/apisix/plugin.lua b/apisix/plugin.lua
index 8186d155af615..075d0584358be 100644
--- a/apisix/plugin.lua
+++ b/apisix/plugin.lua
@@ -235,7 +235,8 @@ end
function _M.filter(user_route, plugins)
plugins = plugins or core.table.new(#local_plugins * 2, 0)
local user_plugin_conf = user_route.value.plugins
- if user_plugin_conf == nil then
+ if user_plugin_conf == nil or
+ core.table.nkeys(user_plugin_conf) == 0 then
if local_conf and local_conf.apisix.enable_debug then
core.response.set_header("Apisix-Plugins", "no plugin")
end
diff --git a/apisix/plugins/authz-keycloak.lua b/apisix/plugins/authz-keycloak.lua
new file mode 100644
index 0000000000000..2704f4ef03563
--- /dev/null
+++ b/apisix/plugins/authz-keycloak.lua
@@ -0,0 +1,165 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local core = require("apisix.core")
+local http = require "resty.http"
+local sub_str = string.sub
+local url = require "net.url"
+local tostring = tostring
+local ngx = ngx
+local plugin_name = "authz-keycloak"
+
+
+local schema = {
+ type = "object",
+ properties = {
+ token_endpoint = {type = "string", minLength = 1, maxLength = 4096},
+ permissions = {
+ type = "array",
+ items = {
+ type = "string",
+ minLength = 1, maxLength = 100
+ },
+ uniqueItems = true
+ },
+ grant_type = {
+ type = "string",
+ default = "urn:ietf:params:oauth:grant-type:uma-ticket",
+ enum = {"urn:ietf:params:oauth:grant-type:uma-ticket"},
+ minLength = 1, maxLength = 100
+ },
+ audience = {type = "string", minLength = 1, maxLength = 100},
+ timeout = {type = "integer", minimum = 1000, default = 3000},
+ policy_enforcement_mode = {
+ type = "string",
+ enum = {"ENFORCING", "PERMISSIVE"},
+ default = "ENFORCING"
+ },
+ keepalive = {type = "boolean", default = true},
+ keepalive_timeout = {type = "integer", minimum = 1000, default = 60000},
+ keepalive_pool = {type = "integer", minimum = 1, default = 5},
+
+ },
+ required = {"token_endpoint"}
+}
+
+
+local _M = {
+ version = 0.1,
+ priority = 2000,
+ type = 'auth',
+ name = plugin_name,
+ schema = schema,
+}
+
+function _M.check_schema(conf)
+ return core.schema.check(schema, conf)
+end
+
+local function is_path_protected(conf)
+ -- TODO if permissions are empty lazy load paths from Keycloak
+ if conf.permissions == nil then
+ return false
+ end
+ return true
+end
+
+
+local function evaluate_permissions(conf, token)
+ local url_decoded = url.parse(conf.token_endpoint)
+ local host = url_decoded.host
+ local port = url_decoded.port
+
+ if not port then
+ if url_decoded.scheme == "https" then
+ port = 443
+ else
+ port = 80
+ end
+ end
+
+ if not is_path_protected(conf) and conf.policy_enforcement_mode == "ENFORCING" then
+ core.response.exit(403)
+ return
+ end
+
+ local httpc = http.new()
+ httpc:set_timeout(conf.timeout)
+
+ local params = {
+ method = "POST",
+ body = ngx.encode_args({
+ grant_type = conf.grant_type,
+ audience = conf.audience,
+ response_mode = "decision",
+ permission = conf.permissions
+ }),
+ headers = {
+ ["Content-Type"] = "application/x-www-form-urlencoded",
+ ["Authorization"] = token
+ }
+ }
+
+ if conf.keepalive then
+ params.keepalive_timeout = conf.keepalive_timeout
+ params.keepalive_pool = conf.keepalive_pool
+ else
+ params.keepalive = conf.keepalive
+ end
+
+ local httpc_res, httpc_err = httpc:request_uri(conf.token_endpoint, params)
+
+ if not httpc_res then
+ core.log.error("error while sending authz request to [", host ,"] port[",
+ tostring(port), "] ", httpc_err)
+ core.response.exit(500, httpc_err)
+ return
+ end
+
+ if httpc_res.status >= 400 then
+ core.log.error("status code: ", httpc_res.status, " msg: ", httpc_res.body)
+ core.response.exit(httpc_res.status, httpc_res.body)
+ end
+end
+
+
+local function fetch_jwt_token(ctx)
+ local token = core.request.header(ctx, "authorization")
+ if not token then
+ return nil, "authorization header not available"
+ end
+
+ local prefix = sub_str(token, 1, 7)
+ if prefix ~= 'Bearer ' and prefix ~= 'bearer ' then
+ return "Bearer " .. token
+ end
+ return token
+end
+
+
+function _M.rewrite(conf, ctx)
+ core.log.debug("hit keycloak-auth rewrite")
+ local jwt_token, err = fetch_jwt_token(ctx)
+ if not jwt_token then
+ core.log.error("failed to fetch JWT token: ", err)
+ return 401, {message = "Missing JWT token in request"}
+ end
+
+ evaluate_permissions(conf, jwt_token)
+end
+
+
+return _M
diff --git a/apisix/plugins/batch-requests.lua b/apisix/plugins/batch-requests.lua
index 34d784a89f970..71878218d8d7e 100644
--- a/apisix/plugins/batch-requests.lua
+++ b/apisix/plugins/batch-requests.lua
@@ -14,12 +14,15 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
-local core = require("apisix.core")
-local http = require("resty.http")
-local ngx = ngx
-local io_open = io.open
-local ipairs = ipairs
-local pairs = pairs
+local core = require("apisix.core")
+local http = require("resty.http")
+local ngx = ngx
+local io_open = io.open
+local ipairs = ipairs
+local pairs = pairs
+local str_find = string.find
+local str_lower = string.lower
+
local plugin_name = "batch-requests"
@@ -112,18 +115,42 @@ local function check_input(data)
end
end
+local function lowercase_key_or_init(obj)
+ if not obj then
+ return {}
+ end
-local function set_common_header(data)
- if not data.headers then
- return
+ local lowercase_key_obj = {}
+ for k, v in pairs(obj) do
+ lowercase_key_obj[str_lower(k)] = v
end
+ return lowercase_key_obj
+end
+
+local function ensure_header_lowercase(data)
+ data.headers = lowercase_key_or_init(data.headers)
+
for i,req in ipairs(data.pipeline) do
- if not req.headers then
- req.headers = data.headers
- else
- for k, v in pairs(data.headers) do
- if not req.headers[k] then
+ req.headers = lowercase_key_or_init(req.headers)
+ end
+end
+
+
+local function set_common_header(data)
+ local outer_headers = core.request.headers(nil)
+ for i,req in ipairs(data.pipeline) do
+ for k, v in pairs(data.headers) do
+ if not req.headers[k] then
+ req.headers[k] = v
+ end
+ end
+
+ if outer_headers then
+ for k, v in pairs(outer_headers) do
+ local is_content_header = str_find(k, "content-", 1, true) == 1
+ -- skip headers that start with "content-"
+ if not req.headers[k] and not is_content_header then
req.headers[k] = v
end
end
@@ -198,8 +225,10 @@ local function batch_requests()
core.response.exit(500, {error_msg = "connect to apisix failed: " .. err})
end
+ ensure_header_lowercase(data)
set_common_header(data)
set_common_query(data)
+
local responses, err = httpc:request_pipeline(data.pipeline)
if not responses then
core.response.exit(400, {error_msg = "request failed: " .. err})
diff --git a/apisix/plugins/consumer-restriction.lua b/apisix/plugins/consumer-restriction.lua
new file mode 100644
index 0000000000000..912e2129a8cc2
--- /dev/null
+++ b/apisix/plugins/consumer-restriction.lua
@@ -0,0 +1,94 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local ipairs = ipairs
+local core = require("apisix.core")
+
+local schema = {
+ type = "object",
+ properties = {
+ whitelist = {
+ type = "array",
+ items = {type = "string"},
+ minItems = 1
+ },
+ blacklist = {
+ type = "array",
+ items = {type = "string"},
+ minItems = 1
+ }
+ },
+ oneOf = {
+ {required = {"whitelist"}},
+ {required = {"blacklist"}}
+ }
+}
+
+
+local plugin_name = "consumer-restriction"
+
+
+local _M = {
+ version = 0.1,
+ priority = 2400,
+ name = plugin_name,
+ schema = schema,
+}
+
+local function is_include(value, tab)
+ for k,v in ipairs(tab) do
+ if v == value then
+ return true
+ end
+ end
+ return false
+end
+
+function _M.check_schema(conf)
+ local ok, err = core.schema.check(schema, conf)
+
+ if not ok then
+ return false, err
+ end
+
+ return true
+end
+
+function _M.access(conf, ctx)
+ if not ctx.consumer then
+ return 401, { message = "Missing authentication or identity verification." }
+ end
+
+ local block = false
+ if conf.blacklist and #conf.blacklist > 0 then
+ if is_include(ctx.consumer.username, conf.blacklist) then
+ block = true
+ end
+ end
+
+ if conf.whitelist and #conf.whitelist > 0 then
+ if not is_include(ctx.consumer.username, conf.whitelist) then
+ block = true
+ end
+ end
+
+ if block then
+ return 403, { message = "The consumer is not allowed" }
+ end
+end
+
+
+return _M
diff --git a/apisix/plugins/cors.lua b/apisix/plugins/cors.lua
index b64010e299771..9fe91c94dd582 100644
--- a/apisix/plugins/cors.lua
+++ b/apisix/plugins/cors.lua
@@ -71,11 +71,34 @@ local schema = {
local _M = {
version = 0.1,
priority = 4000,
- type = 'auth',
name = plugin_name,
schema = schema,
}
+local function create_mutiple_origin_cache(conf)
+ if not str_find(conf.allow_origins, ",", 1, true) then
+ return nil
+ end
+ local origin_cache = {}
+ local iterator, err = re_gmatch(conf.allow_origins, "([^,]+)", "jiox")
+ if not iterator then
+ core.log.error("match origins failed: ", err)
+ return nil
+ end
+ while true do
+ local origin, err = iterator()
+ if err then
+ core.log.error("iterate origins failed: ", err)
+ return nil
+ end
+ if not origin then
+ break
+ end
+ origin_cache[origin[0]] = true
+ end
+ return origin_cache
+end
+
function _M.check_schema(conf)
local ok, err = core.schema.check(schema, conf)
if not ok then
@@ -85,63 +108,48 @@ function _M.check_schema(conf)
return true
end
-function _M.access(conf, ctx)
+local function set_cors_headers(conf, ctx)
+ local allow_methods = conf.allow_methods
+ if allow_methods == "**" then
+ allow_methods = "GET,POST,PUT,DELETE,PATCH,HEAD,OPTIONS,CONNECT,TRACE"
+ end
+
+ ngx.header["Access-Control-Allow-Origin"] = ctx.cors_allow_origins
+ ngx.header["Access-Control-Allow-Methods"] = allow_methods
+ ngx.header["Access-Control-Allow-Headers"] = conf.allow_headers
+ ngx.header["Access-Control-Max-Age"] = conf.max_age
+ if conf.allow_credential then
+ ngx.header["Access-Control-Allow-Credentials"] = true
+ end
+ ngx.header["Access-Control-Expose-Headers"] = conf.expose_headers
+end
+
+function _M.rewrite(conf, ctx)
local allow_origins = conf.allow_origins
+ local req_origin = core.request.header(ctx, "Origin")
if allow_origins == "**" then
- allow_origins = ngx.var.http_origin or '*'
+ allow_origins = req_origin or '*'
+ end
+ local multiple_origin, err = core.lrucache.plugin_ctx(plugin_name, ctx,
+ create_mutiple_origin_cache, conf)
+ if err then
+ return 500, {message = "get multiple origin cache failed: " .. err}
end
- if str_find(allow_origins, ",", 1, true) then
- local finded = false
- local iterator, err = re_gmatch(allow_origins, "([^,]+)", "jiox")
- if not iterator then
- return 500, {message = "match origins failed", error = err}
- end
- while true do
- local origin, err = iterator()
- if err then
- return 500, {message = "iterate origins failed", error = err}
- end
- if not origin then
- break
- end
- if origin[0] == ngx.var.http_origin then
- allow_origins = origin[0]
- finded = true
- break
- end
- end
- if not finded then
+ if multiple_origin then
+ if multiple_origin[req_origin] then
+ allow_origins = req_origin
+ else
return
end
end
ctx.cors_allow_origins = allow_origins
+ set_cors_headers(conf, ctx)
if ctx.var.request_method == "OPTIONS" then
return 200
end
end
-function _M.header_filter(conf, ctx)
- if not ctx.cors_allow_origins then
- -- no origin matched, don't add headers
- return
- end
-
- local allow_methods = conf.allow_methods
- if allow_methods == "**" then
- allow_methods = "GET,POST,PUT,DELETE,PATCH,HEAD,OPTIONS,CONNECT,TRACE"
- end
-
- ngx.header["Access-Control-Allow-Origin"] = ctx.cors_allow_origins
- ngx.header["Access-Control-Allow-Methods"] = allow_methods
- ngx.header["Access-Control-Allow-Headers"] = conf.allow_headers
- ngx.header["Access-Control-Expose-Headers"] = conf.expose_headers
- ngx.header["Access-Control-Max-Age"] = conf.max_age
- if conf.allow_credential then
- ngx.header["Access-Control-Allow-Credentials"] = true
- end
-end
-
return _M
diff --git a/apisix/plugins/echo.lua b/apisix/plugins/echo.lua
new file mode 100644
index 0000000000000..76ab4e57e5c62
--- /dev/null
+++ b/apisix/plugins/echo.lua
@@ -0,0 +1,119 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local core = require("apisix.core")
+local pairs = pairs
+local type = type
+local ngx = ngx
+
+
+local schema = {
+ type = "object",
+ properties = {
+ before_body = {
+ description = "body before the filter phase.",
+ type = "string"
+ },
+ body = {
+ description = "body to replace upstream response.",
+ type = "string"
+ },
+ after_body = {
+ description = "body after the modification of filter phase.",
+ type = "string"
+ },
+ headers = {
+ description = "new headers for response",
+ type = "object",
+ minProperties = 1,
+ },
+ auth_value = {
+ description = "auth value",
+ type = "string"
+ }
+ },
+ anyOf = {
+ {required = {"before_body"}},
+ {required = {"body"}},
+ {required = {"after_body"}}
+ },
+ minProperties = 1
+}
+
+local plugin_name = "echo"
+
+local _M = {
+ version = 0.1,
+ priority = 412,
+ name = plugin_name,
+ schema = schema,
+}
+
+function _M.check_schema(conf)
+ if conf.headers then
+ conf.headers_arr = {}
+
+ for field, value in pairs(conf.headers) do
+ if type(field) == 'string'
+ and (type(value) == 'string' or type(value) == 'number') then
+ if #field == 0 then
+ return false, 'invalid field length in header'
+ end
+ core.table.insert(conf.headers_arr, field)
+ core.table.insert(conf.headers_arr, value)
+ else
+ return false, 'invalid type as header value'
+ end
+ end
+ end
+
+ return core.schema.check(schema, conf)
+end
+
+function _M.body_filter(conf, ctx)
+ if conf.body then
+ ngx.arg[1] = conf.body
+ end
+
+ if conf.before_body then
+ ngx.arg[1] = conf.before_body .. ngx.arg[1]
+ end
+
+ if conf.after_body then
+ ngx.arg[1] = ngx.arg[1] .. conf.after_body
+ end
+ ngx.arg[2] = true
+end
+
+function _M.access(conf, ctx)
+ local value = core.request.header(ctx, "Authorization")
+
+ if value ~= conf.auth_value then
+ return 401, "unauthorized body"
+ end
+
+end
+
+function _M.header_filter(conf, ctx)
+ if conf.headers_arr then
+ local field_cnt = #conf.headers_arr
+ for i = 1, field_cnt, 2 do
+ ngx.header[conf.headers_arr[i]] = conf.headers_arr[i+1]
+ end
+ end
+end
+
+return _M
diff --git a/apisix/plugins/example-plugin.lua b/apisix/plugins/example-plugin.lua
index 025ade4fd8e06..bf36837985110 100644
--- a/apisix/plugins/example-plugin.lua
+++ b/apisix/plugins/example-plugin.lua
@@ -15,7 +15,7 @@
-- limitations under the License.
--
local core = require("apisix.core")
-local balancer = require("ngx.balancer")
+local upstream = require("apisix.upstream")
local schema = {
type = "object",
@@ -60,25 +60,27 @@ end
function _M.access(conf, ctx)
core.log.warn("plugin access phase, conf: ", core.json.encode(conf))
-- return 200, {message = "hit example plugin"}
-end
-
-
-function _M.balancer(conf, ctx)
- core.log.warn("plugin balancer phase, conf: ", core.json.encode(conf))
if not conf.ip then
return
end
- -- NOTE: update `ctx.balancer_name` is important, APISIX will skip other
- -- balancer handler.
- ctx.balancer_name = plugin_name
+ local up_conf = {
+ type = "roundrobin",
+ nodes = {
+ {host = conf.ip, port = conf.port, weight = 1}
+ }
+ }
- local ok, err = balancer.set_current_peer(conf.ip, conf.port)
+ local ok, err = upstream.check_schema(up_conf)
if not ok then
- core.log.error("failed to set server peer: ", err)
- return core.response.exit(502)
+ return 500, err
end
+
+ local matched_route = ctx.matched_route
+ upstream.set(ctx, up_conf.type .. "#route_" .. matched_route.value.id,
+ ctx.conf_version, up_conf, matched_route)
+ return
end
diff --git a/apisix/plugins/grpc-transcode/util.lua b/apisix/plugins/grpc-transcode/util.lua
index 83d89abaf2a01..d705a1ed7126e 100644
--- a/apisix/plugins/grpc-transcode/util.lua
+++ b/apisix/plugins/grpc-transcode/util.lua
@@ -51,7 +51,7 @@ local function get_from_request(name, kind)
local request_table
if ngx.req.get_method() == "POST" then
if string.find(ngx.req.get_headers()["Content-Type"] or "",
- "application/json", true) then
+ "application/json", 1, true) then
request_table = json.decode(ngx.req.get_body_data())
else
request_table = ngx.req.get_post_args()
diff --git a/apisix/plugins/heartbeat.lua b/apisix/plugins/heartbeat.lua
index 0a6cf76cbdc53..ed4fa2c208a21 100644
--- a/apisix/plugins/heartbeat.lua
+++ b/apisix/plugins/heartbeat.lua
@@ -114,7 +114,7 @@ local function report()
local res
res, err = request_apisix_svr(args)
if not res then
- core.log.error("failed to report heartbeat information: ", err)
+ core.log.info("failed to report heartbeat information: ", err)
return
end
diff --git a/apisix/plugins/http-logger.lua b/apisix/plugins/http-logger.lua
new file mode 100644
index 0000000000000..44df6aeff99cb
--- /dev/null
+++ b/apisix/plugins/http-logger.lua
@@ -0,0 +1,176 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local core = require("apisix.core")
+local log_util = require("apisix.utils.log-util")
+local batch_processor = require("apisix.utils.batch-processor")
+local plugin_name = "http-logger"
+local ngx = ngx
+local tostring = tostring
+local http = require "resty.http"
+local url = require "net.url"
+local buffers = {}
+
+local schema = {
+ type = "object",
+ properties = {
+ uri = {type = "string"},
+ auth_header = {type = "string", default = ""},
+ timeout = {type = "integer", minimum = 1, default = 3},
+ name = {type = "string", default = "http logger"},
+ max_retry_count = {type = "integer", minimum = 0, default = 0},
+ retry_delay = {type = "integer", minimum = 0, default = 1},
+ buffer_duration = {type = "integer", minimum = 1, default = 60},
+ inactive_timeout = {type = "integer", minimum = 1, default = 5},
+ batch_max_size = {type = "integer", minimum = 1, default = 1000},
+ include_req_body = {type = "boolean", default = false}
+ },
+ required = {"uri"}
+}
+
+
+local _M = {
+ version = 0.1,
+ priority = 410,
+ name = plugin_name,
+ schema = schema,
+}
+
+
+function _M.check_schema(conf)
+ return core.schema.check(schema, conf)
+end
+
+
+local function send_http_data(conf, log_message)
+ local err_msg
+ local res = true
+ local url_decoded = url.parse(conf.uri)
+ local host = url_decoded.host
+ local port = url_decoded.port
+
+ if ((not port) and url_decoded.scheme == "https") then
+ port = 443
+ elseif not port then
+ port = 80
+ end
+
+ local httpc = http.new()
+ httpc:set_timeout(conf.timeout * 1000)
+ local ok, err = httpc:connect(host, port)
+
+ if not ok then
+ return false, "failed to connect to host[" .. host .. "] port["
+ .. tostring(port) .. "] " .. err
+ end
+
+ if url_decoded.scheme == "https" then
+ ok, err = httpc:ssl_handshake(true, host, false)
+ if not ok then
+ return nil, "failed to perform SSL with host[" .. host .. "] "
+ .. "port[" .. tostring(port) .. "] " .. err
+ end
+ end
+
+ local httpc_res, httpc_err = httpc:request({
+ method = "POST",
+ path = url_decoded.path,
+ query = url_decoded.query,
+ body = log_message,
+ headers = {
+ ["Host"] = url_decoded.host,
+ ["Content-Type"] = "application/json",
+ ["Authorization"] = conf.auth_header
+ }
+ })
+
+ if not httpc_res then
+ return false, "error while sending data to [" .. host .. "] port["
+ .. tostring(port) .. "] " .. httpc_err
+ end
+
+ -- some error occurred in the server
+ if httpc_res.status >= 400 then
+ res = false
+ err_msg = "server returned status code[" .. httpc_res.status .. "] host["
+ .. host .. "] port[" .. tostring(port) .. "] "
+ .. "body[" .. httpc_res:read_body() .. "]"
+ end
+
+ -- keep the connection alive
+ ok, err = httpc:set_keepalive(conf.keepalive)
+
+ if not ok then
+ core.log.debug("failed to keep the connection alive: ", err)
+ end
+
+ return res, err_msg
+end
+
+
+function _M.log(conf)
+ local entry = log_util.get_full_log(ngx, conf)
+
+ if not entry.route_id then
+ core.log.error("failed to obtain the route id for http logger")
+ return
+ end
+
+ local log_buffer = buffers[entry.route_id]
+
+ if log_buffer then
+ log_buffer:push(entry)
+ return
+ end
+
+ -- Generate a function to be executed by the batch processor
+ local func = function(entries, batch_max_size)
+ local data, err
+ if batch_max_size == 1 then
+ data, err = core.json.encode(entries[1]) -- encode as single {}
+ else
+ data, err = core.json.encode(entries) -- encode as array [{}]
+ end
+
+ if not data then
+ return false, 'error occurred while encoding the data: ' .. err
+ end
+
+ return send_http_data(conf, data)
+ end
+
+ local config = {
+ name = conf.name,
+ retry_delay = conf.retry_delay,
+ batch_max_size = conf.batch_max_size,
+ max_retry_count = conf.max_retry_count,
+ buffer_duration = conf.buffer_duration,
+ inactive_timeout = conf.inactive_timeout,
+ }
+
+ local err
+ log_buffer, err = batch_processor:new(func, config)
+
+ if not log_buffer then
+ core.log.error("error when creating the batch processor: ", err)
+ return
+ end
+
+ buffers[entry.route_id] = log_buffer
+ log_buffer:push(entry)
+end
+
+return _M
diff --git a/apisix/plugins/ip-restriction.lua b/apisix/plugins/ip-restriction.lua
index ab4deed3a0d86..f08c9c7ccbd01 100644
--- a/apisix/plugins/ip-restriction.lua
+++ b/apisix/plugins/ip-restriction.lua
@@ -110,7 +110,7 @@ function _M.check_schema(conf)
end
-local function create_ip_mather(ip_list)
+local function create_ip_matcher(ip_list)
local ip, err = ipmatcher.new(ip_list)
if not ip then
core.log.error("failed to create ip matcher: ", err,
@@ -128,7 +128,7 @@ function _M.access(conf, ctx)
if conf.blacklist and #conf.blacklist > 0 then
local matcher = lrucache(conf.blacklist, nil,
- create_ip_mather, conf.blacklist)
+ create_ip_matcher, conf.blacklist)
if matcher then
block = matcher:match(remote_addr)
end
@@ -136,7 +136,7 @@ function _M.access(conf, ctx)
if conf.whitelist and #conf.whitelist > 0 then
local matcher = lrucache(conf.whitelist, nil,
- create_ip_mather, conf.whitelist)
+ create_ip_matcher, conf.whitelist)
if matcher then
block = not matcher:match(remote_addr)
end
diff --git a/apisix/plugins/kafka-logger.lua b/apisix/plugins/kafka-logger.lua
index a9050b9d6080f..fc7d90cde719f 100644
--- a/apisix/plugins/kafka-logger.lua
+++ b/apisix/plugins/kafka-logger.lua
@@ -21,7 +21,11 @@ local batch_processor = require("apisix.utils.batch-processor")
local pairs = pairs
local type = type
local table = table
+local ipairs = ipairs
local plugin_name = "kafka-logger"
+local stale_timer_running = false
+local timer_at = ngx.timer.at
+local tostring = tostring
local ngx = ngx
local buffers = {}
@@ -40,6 +44,7 @@ local schema = {
buffer_duration = {type = "integer", minimum = 1, default = 60},
inactive_timeout = {type = "integer", minimum = 1, default = 5},
batch_max_size = {type = "integer", minimum = 1, default = 1000},
+ include_req_body = {type = "boolean", default = false}
},
required = {"broker_list", "kafka_topic", "key"}
}
@@ -89,9 +94,25 @@ local function send_kafka_data(conf, log_message)
end
end
+-- remove stale objects from the memory after timer expires
+local function remove_stale_objects(premature)
+ if premature then
+ return
+ end
+
+ for key, batch in pairs(buffers) do
+ if #batch.entry_buffer.entries == 0 and #batch.batch_to_process == 0 then
+ core.log.debug("removing batch processor stale object, route id:", tostring(key))
+ buffers[key] = nil
+ end
+ end
+
+ stale_timer_running = false
+end
+
function _M.log(conf)
- local entry = log_util.get_full_log(ngx)
+ local entry = log_util.get_full_log(ngx, conf)
if not entry.route_id then
core.log.error("failed to obtain the route id for kafka logger")
@@ -100,6 +121,12 @@ function _M.log(conf)
local log_buffer = buffers[entry.route_id]
+ if not stale_timer_running then
+ -- run the timer every 30 mins if any log is present
+ timer_at(1800, remove_stale_objects)
+ stale_timer_running = true
+ end
+
if log_buffer then
log_buffer:push(entry)
return
diff --git a/apisix/plugins/limit-conn.lua b/apisix/plugins/limit-conn.lua
index dbffbabb8277c..6ca46d5d1df7f 100644
--- a/apisix/plugins/limit-conn.lua
+++ b/apisix/plugins/limit-conn.lua
@@ -30,9 +30,9 @@ local schema = {
enum = {"remote_addr", "server_addr", "http_x_real_ip",
"http_x_forwarded_for"},
},
- rejected_code = {type = "integer", minimum = 200},
+ rejected_code = {type = "integer", minimum = 200, default = 503},
},
- required = {"conn", "burst", "default_conn_delay", "key", "rejected_code"}
+ required = {"conn", "burst", "default_conn_delay", "key"}
}
diff --git a/apisix/plugins/limit-count.lua b/apisix/plugins/limit-count.lua
index 42db2d54784b4..3e9d4af28adee 100644
--- a/apisix/plugins/limit-count.lua
+++ b/apisix/plugins/limit-count.lua
@@ -34,7 +34,8 @@ local schema = {
enum = {"remote_addr", "server_addr", "http_x_real_ip",
"http_x_forwarded_for"},
},
- rejected_code = {type = "integer", minimum = 200, maximum = 600},
+ rejected_code = {type = "integer", minimum = 200, maximum = 600,
+ default = 503},
policy = {
type = "string",
enum = {"local", "redis"},
@@ -53,7 +54,7 @@ local schema = {
},
},
additionalProperties = false,
- required = {"count", "time_window", "key", "rejected_code"},
+ required = {"count", "time_window", "key"},
}
diff --git a/apisix/plugins/limit-req.lua b/apisix/plugins/limit-req.lua
index e35c4b328e514..1caadce8b2f11 100644
--- a/apisix/plugins/limit-req.lua
+++ b/apisix/plugins/limit-req.lua
@@ -29,9 +29,9 @@ local schema = {
enum = {"remote_addr", "server_addr", "http_x_real_ip",
"http_x_forwarded_for"},
},
- rejected_code = {type = "integer", minimum = 200},
+ rejected_code = {type = "integer", minimum = 200, default = 503},
},
- required = {"rate", "burst", "key", "rejected_code"}
+ required = {"rate", "burst", "key"}
}
diff --git a/apisix/plugins/openid-connect.lua b/apisix/plugins/openid-connect.lua
index 6a93226f9baac..2572c856ca955 100644
--- a/apisix/plugins/openid-connect.lua
+++ b/apisix/plugins/openid-connect.lua
@@ -116,11 +116,12 @@ local function introspect(ctx, conf)
end
else
res, err = openidc.introspect(conf)
- if res then
+ if err then
+ return ngx.HTTP_UNAUTHORIZED, err
+ else
return res
end
end
-
if conf.bearer_only then
ngx.header["WWW-Authenticate"] = 'Bearer realm="' .. conf.realm
.. '",error="' .. err .. '"'
diff --git a/apisix/plugins/prometheus/exporter.lua b/apisix/plugins/prometheus/exporter.lua
index 538deaba3a496..2c0b6fc618abf 100644
--- a/apisix/plugins/prometheus/exporter.lua
+++ b/apisix/plugins/prometheus/exporter.lua
@@ -14,26 +14,48 @@
-- See the License for the specific language governing permissions and
-- limitations under the License.
--
-local base_prometheus = require("resty.prometheus")
+local base_prometheus = require("prometheus")
local core = require("apisix.core")
local ipairs = ipairs
local ngx = ngx
local ngx_capture = ngx.location.capture
local re_gmatch = ngx.re.gmatch
+local tonumber = tonumber
+local select = select
local prometheus
-- Default set of latency buckets, 1ms to 60s:
local DEFAULT_BUCKETS = { 1, 2, 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, 70,
80, 90, 100, 200, 300, 400, 500, 1000,
- 2000, 5000, 10000, 30000, 60000 }
+ 2000, 5000, 10000, 30000, 60000
+}
local metrics = {}
-local _M = {version = 0.3}
+local inner_tab_arr = {}
+local clear_tab = core.table.clear
+local function gen_arr(...)
+ clear_tab(inner_tab_arr)
+
+ for i = 1, select('#', ...) do
+ inner_tab_arr[i] = select(i, ...)
+ end
+
+ return inner_tab_arr
+end
+
+
+local _M = {}
function _M.init()
+ -- todo: support hot reload, we may need to update the lua-prometheus
+ -- library
+ if ngx.get_phase() ~= "init" and ngx.get_phase() ~= "init_worker" then
+ return
+ end
+
core.table.clear(metrics)
-- across all services
@@ -54,6 +76,10 @@ function _M.init()
"HTTP request latency per service in APISIX",
{"type", "service", "node"}, DEFAULT_BUCKETS)
+ metrics.overhead = prometheus:histogram("http_overhead",
+ "HTTP request overhead per service in APISIX",
+ {"type", "service", "node"}, DEFAULT_BUCKETS)
+
metrics.bandwidth = prometheus:counter("bandwidth",
"Total bandwidth in bytes consumed per service in APISIX",
{"type", "route", "service", "node"})
@@ -75,21 +101,31 @@ function _M.log(conf, ctx)
service_id = vars.host
end
- metrics.status:inc(1, vars.status, route_id, service_id, balancer_ip)
+ metrics.status:inc(1,
+ gen_arr(vars.status, route_id, service_id, balancer_ip))
local latency = (ngx.now() - ngx.req.start_time()) * 1000
- metrics.latency:observe(latency, "request", service_id, balancer_ip)
+ metrics.latency:observe(latency,
+ gen_arr("request", service_id, balancer_ip))
+
+ local overhead = latency
+ if ctx.var.upstream_response_time then
+ overhead = overhead - tonumber(ctx.var.upstream_response_time) * 1000
+ end
+ metrics.overhead:observe(overhead,
+ gen_arr("request", service_id, balancer_ip))
- metrics.bandwidth:inc(vars.request_length, "ingress", route_id, service_id,
- balancer_ip)
+ metrics.bandwidth:inc(vars.request_length,
+ gen_arr("ingress", route_id, service_id, balancer_ip))
- metrics.bandwidth:inc(vars.bytes_sent, "egress", route_id, service_id,
- balancer_ip)
+ metrics.bandwidth:inc(vars.bytes_sent,
+ gen_arr("egress", route_id, service_id, balancer_ip))
end
local ngx_statu_items = {"active", "accepted", "handled", "total",
"reading", "writing", "waiting"}
+ local label_values = {}
local function nginx_status()
local res = ngx_capture("/apisix/nginx_status")
if not res or res.status ~= 200 then
@@ -114,7 +150,8 @@ local function nginx_status()
break
end
- metrics.connections:set(val[0], name)
+ label_values[1] = name
+ metrics.connections:set(val[0], label_values)
end
end
diff --git a/apisix/plugins/redirect.lua b/apisix/plugins/redirect.lua
index 6cc28ac31307d..a9df21f26b5a0 100644
--- a/apisix/plugins/redirect.lua
+++ b/apisix/plugins/redirect.lua
@@ -30,8 +30,12 @@ local schema = {
properties = {
ret_code = {type = "integer", minimum = 200, default = 302},
uri = {type = "string", minLength = 2},
+ http_to_https = {type = "boolean"}, -- default is false
},
- required = {"uri"},
+ oneOf = {
+ {required = {"uri"}},
+ {required = {"http_to_https"}}
+ }
}
@@ -80,11 +84,13 @@ function _M.check_schema(conf)
return false, err
end
- local uri_segs, err = parse_uri(conf.uri)
- if not uri_segs then
- return false, err
+ if conf.uri then
+ local uri_segs, err = parse_uri(conf.uri)
+ if not uri_segs then
+ return false, err
+ end
+ core.log.info(core.json.delay_encode(uri_segs))
end
- core.log.info(core.json.delay_encode(uri_segs))
return true
end
@@ -120,15 +126,22 @@ end
function _M.rewrite(conf, ctx)
core.log.info("plugin rewrite phase, conf: ", core.json.delay_encode(conf))
- local new_uri, err = concat_new_uri(conf.uri, ctx)
- if not new_uri then
- core.log.error("failed to generate new uri by: ", conf.uri, " error: ",
- err)
- core.response.exit(500)
+ if conf.http_to_https and ctx.var.scheme == "http" then
+ conf.uri = "https://$host$request_uri"
+ conf.ret_code = 301
end
- core.response.set_header("Location", new_uri)
- core.response.exit(conf.ret_code)
+ if conf.uri and conf.ret_code then
+ local new_uri, err = concat_new_uri(conf.uri, ctx)
+ if not new_uri then
+ core.log.error("failed to generate new uri by: ", conf.uri, " error: ",
+ err)
+ core.response.exit(500)
+ end
+
+ core.response.set_header("Location", new_uri)
+ core.response.exit(conf.ret_code)
+ end
end
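The rewritten redirect logic above can be distilled into a pure decision function. This is an illustrative sketch only — `redirect_target`, and the plain `conf`/`scheme` arguments standing in for the plugin conf and `ctx.var.scheme`, are not part of the plugin API:

```lua
-- Decide the redirect target: http_to_https forces a 301 to the
-- https form of the request; otherwise fall back to conf.uri with
-- conf.ret_code (default 302, matching the schema default).
local function redirect_target(conf, scheme)
    if conf.http_to_https and scheme == "http" then
        return "https://$host$request_uri", 301
    end
    return conf.uri, conf.ret_code or 302
end

local uri, code = redirect_target({http_to_https = true}, "http")
assert(uri == "https://$host$request_uri" and code == 301)

uri, code = redirect_target({uri = "/new"}, "https")
assert(uri == "/new" and code == 302)
```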
diff --git a/apisix/plugins/skywalking.lua b/apisix/plugins/skywalking.lua
new file mode 100644
index 0000000000000..f95286bd8d143
--- /dev/null
+++ b/apisix/plugins/skywalking.lua
@@ -0,0 +1,80 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local core = require("apisix.core")
+local ngx = ngx
+local math = math
+
+local sw_client = require("apisix.plugins.skywalking.client")
+local sw_tracer = require("apisix.plugins.skywalking.tracer")
+
+local plugin_name = "skywalking"
+
+
+local schema = {
+ type = "object",
+ properties = {
+ endpoint = {type = "string"},
+ sample_ratio = {type = "number", minimum = 0.00001, maximum = 1, default = 1},
+ service_name = {
+ type = "string",
+ description = "service name for skywalking",
+ default = "APISIX",
+ },
+ },
+ required = {"endpoint"}
+}
+
+
+local _M = {
+ version = 0.1,
+ priority = -1100, -- last running plugin, but before serverless post func
+ name = plugin_name,
+ schema = schema,
+}
+
+
+function _M.check_schema(conf)
+ return core.schema.check(schema, conf)
+end
+
+
+function _M.rewrite(conf, ctx)
+ core.log.debug("rewrite phase of skywalking plugin")
+ ctx.skywalking_sample = false
+ if conf.sample_ratio == 1 or math.random() < conf.sample_ratio then
+ ctx.skywalking_sample = true
+ sw_client.heartbeat(conf)
+ -- Currently, we can not have the upstream real network address
+ sw_tracer.start(ctx, conf.endpoint, "upstream service")
+ end
+end
+
+
+function _M.body_filter(conf, ctx)
+ if ctx.skywalking_sample and ngx.arg[2] then
+ sw_tracer.finish(ctx)
+ end
+end
+
+
+function _M.log(conf, ctx)
+ if ctx.skywalking_sample then
+ sw_tracer.prepareForReport(ctx, conf.endpoint)
+ end
+end
+
+return _M
diff --git a/apisix/plugins/skywalking/client.lua b/apisix/plugins/skywalking/client.lua
new file mode 100644
index 0000000000000..f83a6e35bf803
--- /dev/null
+++ b/apisix/plugins/skywalking/client.lua
@@ -0,0 +1,232 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local core = require("apisix.core")
+local http = require("resty.http")
+local cjson = require('cjson')
+local ngx = ngx
+local ipairs = ipairs
+
+local register = require("skywalking.register")
+
+local _M = {}
+
+local function register_service(conf)
+ local endpoint = conf.endpoint
+
+ local tracing_buffer = ngx.shared['skywalking-tracing-buffer']
+ local service_id = tracing_buffer:get(endpoint .. '_service_id')
+ if service_id then
+ return service_id
+ end
+
+ local service_name = conf.service_name
+ local service = register.newServiceRegister(service_name)
+
+ local httpc = http.new()
+ local res, err = httpc:request_uri(endpoint .. '/v2/service/register',
+ {
+ method = "POST",
+ body = core.json.encode(service),
+ headers = {
+ ["Content-Type"] = "application/json",
+ },
+ })
+ if not res then
+ core.log.error("skywalking service register failed, request uri: ",
+ endpoint .. '/v2/service/register', ", err: ", err)
+
+ elseif res.status == 200 then
+ core.log.debug("skywalking service register response: ", res.body)
+ local register_results = cjson.decode(res.body)
+
+ for _, result in ipairs(register_results) do
+ if result.key == service_name then
+ service_id = result.value
+ core.log.debug("skywalking service registered, service id:"
+ .. service_id)
+ end
+ end
+
+ else
+ core.log.error("skywalking service register failed, request uri:",
+ endpoint .. "/v2/service/register",
+ ", response code:", res.status)
+ end
+
+ if service_id then
+ tracing_buffer:set(endpoint .. '_service_id', service_id)
+ end
+
+ return service_id
+end
+
+local function register_service_instance(conf, service_id)
+ local endpoint = conf.endpoint
+
+ local tracing_buffer = ngx.shared['skywalking-tracing-buffer']
+ local instance_id = tracing_buffer:get(endpoint .. '_instance_id')
+ if instance_id then
+ return instance_id
+ end
+
+ local service_instance_name = core.id.get()
+ local service_instance = register.newServiceInstanceRegister(
+ service_id,
+ service_instance_name,
+ ngx.now() * 1000)
+
+ local httpc = http.new()
+ local res, err = httpc:request_uri(endpoint .. '/v2/instance/register',
+ {
+ method = "POST",
+ body = core.json.encode(service_instance),
+ headers = {
+ ["Content-Type"] = "application/json",
+ },
+ })
+
+ if not res then
+ core.log.error("skywalking service instance register failed",
+ ", request uri: ", conf.endpoint .. '/v2/instance/register',
+ ", err: ", err)
+
+ elseif res.status == 200 then
+ core.log.debug("skywalking service instance register response: ", res.body)
+ local register_results = cjson.decode(res.body)
+
+ for _, result in ipairs(register_results) do
+ if result.key == service_instance_name then
+ instance_id = result.value
+ end
+ end
+
+ else
+ core.log.error("skywalking service instance register failed, ",
+ "response code:", res.status)
+ end
+
+ if instance_id then
+ tracing_buffer:set(endpoint .. '_instance_id', instance_id)
+ end
+
+ return instance_id
+end
+
+local function ping(endpoint)
+ local tracing_buffer = ngx.shared['skywalking-tracing-buffer']
+ local ping_pkg = register.newServiceInstancePingPkg(
+ tracing_buffer:get(endpoint .. '_instance_id'),
+ core.id.get(),
+ ngx.now() * 1000)
+
+ local httpc = http.new()
+ local res, err = httpc:request_uri(endpoint .. '/v2/instance/heartbeat', {
+ method = "POST",
+ body = core.json.encode(ping_pkg),
+ headers = {
+ ["Content-Type"] = "application/json",
+ },
+ })
+
+ if err then
+ core.log.error("skywalking agent ping failed, err: ", err)
+ else
+ core.log.debug(res.body)
+ end
+end
+
+-- report trace segments to the backend
+local function report_traces(endpoint)
+ local tracing_buffer = ngx.shared['skywalking-tracing-buffer']
+ local segment = tracing_buffer:rpop(endpoint .. '_segment')
+
+ local count = 0
+
+ local httpc = http.new()
+
+ while segment ~= nil do
+ local res, err = httpc:request_uri(endpoint .. '/v2/segments', {
+ method = "POST",
+ body = segment,
+ headers = {
+ ["Content-Type"] = "application/json",
+ },
+ })
+
+ if err == nil then
+ if res.status ~= 200 then
+ core.log.error("skywalking segment report failed, response code ", res.status)
+ break
+ else
+ count = count + 1
+ core.log.debug(res.body)
+ end
+ else
+ core.log.error("skywalking segment report failed, err: ", err)
+ break
+ end
+
+ segment = tracing_buffer:rpop(endpoint .. '_segment')
+ end
+
+ if count > 0 then
+ core.log.debug(count, " skywalking segments reported")
+ end
+end
+
+do
+ local heartbeat_timer
+
+function _M.heartbeat(conf)
+ local sw_heartbeat = function()
+ local service_id = register_service(conf)
+ if not service_id then
+ return
+ end
+
+ core.log.debug("skywalking service registered, ",
+ "service id: ", service_id)
+
+ local service_instance_id = register_service_instance(conf, service_id)
+ if not service_instance_id then
+ return
+ end
+
+ core.log.debug("skywalking service instance registered, ",
+ "service instance id: ", service_instance_id)
+ report_traces(conf.endpoint)
+ ping(conf.endpoint)
+ end
+
+ local err
+ if ngx.worker.id() == 0 and not heartbeat_timer then
+ heartbeat_timer, err = core.timer.new("skywalking_heartbeat",
+ sw_heartbeat,
+ {check_interval = 3}
+ )
+ if not heartbeat_timer then
+ core.log.error("failed to create skywalking_heartbeat timer: ", err)
+ else
+ core.log.info("succeed to create timer: skywalking heartbeat")
+ end
+ end
+end
+
+end -- do
+
+
+return _M
diff --git a/apisix/plugins/skywalking/tracer.lua b/apisix/plugins/skywalking/tracer.lua
new file mode 100644
index 0000000000000..187b941edf46d
--- /dev/null
+++ b/apisix/plugins/skywalking/tracer.lua
@@ -0,0 +1,101 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local core = require("apisix.core")
+local span = require("skywalking.span")
+local tracing_context = require("skywalking.tracing_context")
+local span_layer = require("skywalking.span_layer")
+local sw_segment = require('skywalking.segment')
+
+local pairs = pairs
+local ngx = ngx
+
+-- Constant pre-defined in SkyWalking main repo
+-- 84 represents Nginx
+local NGINX_COMPONENT_ID = 6000
+
+local _M = {}
+
+function _M.start(ctx, endpoint, upstream_name)
+ local context
+ -- TODO: use lrucache for better performance
+ local tracing_buffer = ngx.shared['skywalking-tracing-buffer']
+ local instance_id = tracing_buffer:get(endpoint .. '_instance_id')
+ local service_id = tracing_buffer:get(endpoint .. '_service_id')
+
+ if service_id and instance_id then
+ context = tracing_context.new(service_id, instance_id)
+ else
+ context = tracing_context.newNoOP()
+ end
+
+ local context_carrier = {}
+ context_carrier["sw6"] = ngx.req.get_headers()["sw6"]
+ local entry_span = tracing_context.createEntrySpan(context, ctx.var.uri, nil, context_carrier)
+ span.start(entry_span, ngx.now() * 1000)
+ span.setComponentId(entry_span, NGINX_COMPONENT_ID)
+ span.setLayer(entry_span, span_layer.HTTP)
+
+ span.tag(entry_span, 'http.method', ngx.req.get_method())
+ span.tag(entry_span, 'http.params', ctx.var.scheme .. '://'
+ .. ctx.var.host .. ctx.var.request_uri)
+
+ context_carrier = {}
+ local exit_span = tracing_context.createExitSpan(context,
+ ctx.var.upstream_uri,
+ entry_span,
+ upstream_name,
+ context_carrier)
+ span.start(exit_span, ngx.now() * 1000)
+ span.setComponentId(exit_span, NGINX_COMPONENT_ID)
+ span.setLayer(exit_span, span_layer.HTTP)
+
+ for name, value in pairs(context_carrier) do
+ ngx.req.set_header(name, value)
+ end
+
+ -- Push the data in the context
+ ctx.sw_tracing_context = context
+ ctx.sw_entry_span = entry_span
+ ctx.sw_exit_span = exit_span
+
+ core.log.debug("push data into skywalking context")
+end
+
+function _M.finish(ctx)
+ -- Finish the exit span when received the first response package from upstream
+ if ctx.sw_exit_span then
+ span.finish(ctx.sw_exit_span, ngx.now() * 1000)
+ ctx.sw_exit_span = nil
+ end
+end
+
+function _M.prepareForReport(ctx, endpoint)
+ if ctx.sw_entry_span then
+ span.finish(ctx.sw_entry_span, ngx.now() * 1000)
+ local status, segment = tracing_context.drainAfterFinished(ctx.sw_tracing_context)
+ if status then
+ local segment_json = core.json.encode(sw_segment.transform(segment))
+ core.log.debug('segment = ', segment_json)
+
+ local tracing_buffer = ngx.shared['skywalking-tracing-buffer']
+ local length = tracing_buffer:lpush(endpoint .. '_segment', segment_json)
+ core.log.debug('segment buffer size = ', length)
+ end
+ end
+end
+
+return _M
diff --git a/apisix/plugins/syslog.lua b/apisix/plugins/syslog.lua
new file mode 100644
index 0000000000000..7b96a2e010b67
--- /dev/null
+++ b/apisix/plugins/syslog.lua
@@ -0,0 +1,189 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local core = require("apisix.core")
+local log_util = require("apisix.utils.log-util")
+local batch_processor = require("apisix.utils.batch-processor")
+local logger_socket = require("resty.logger.socket")
+local plugin_name = "syslog"
+local ngx = ngx
+local buffers = {}
+local ipairs = ipairs
+local stale_timer_running = false
+local timer_at = ngx.timer.at
+local tostring = tostring
+
+local schema = {
+ type = "object",
+ properties = {
+ host = {type = "string"},
+ port = {type = "integer"},
+ name = {type = "string", default = "sys logger"},
+ flush_limit = {type = "integer", minimum = 1, default = 4096},
+ drop_limit = {type = "integer", default = 1048576},
+ timeout = {type = "integer", minimum = 1, default = 3},
+ sock_type = {type = "string", default = "tcp"},
+ max_retry_times = {type = "integer", minimum = 1, default = 1},
+ retry_interval = {type = "integer", minimum = 0, default = 1},
+ pool_size = {type = "integer", minimum = 5, default = 5},
+ tls = {type = "boolean", default = false},
+ batch_max_size = {type = "integer", minimum = 1, default = 1000},
+ buffer_duration = {type = "integer", minimum = 1, default = 60},
+ include_req_body = {type = "boolean", default = false}
+ },
+ required = {"host", "port"}
+}
+
+local lrucache = core.lrucache.new({
+ ttl = 300, count = 512
+})
+
+local _M = {
+ version = 0.1,
+ priority = 401,
+ name = plugin_name,
+ schema = schema,
+}
+
+function _M.check_schema(conf)
+ return core.schema.check(schema, conf)
+end
+
+function _M.flush_syslog(logger)
+ local ok, err = logger:flush(logger)
+ if not ok then
+ core.log.error("failed to flush message:", err)
+ end
+end
+
+local function send_syslog_data(conf, log_message)
+ local err_msg
+ local res = true
+
+ -- fetch api_ctx
+ local api_ctx = ngx.ctx.api_ctx
+ if not api_ctx then
+ core.log.error("invalid api_ctx, cannot proceed with the sys logger plugin")
+ return core.response.exit(500)
+ end
+
+ -- fetch it from lrucache
+ local logger, err = lrucache(api_ctx.conf_type .. "#" .. api_ctx.conf_id, api_ctx.conf_version,
+ logger_socket.new, logger_socket, {
+ host = conf.host,
+ port = conf.port,
+ flush_limit = conf.flush_limit,
+ drop_limit = conf.drop_limit,
+ timeout = conf.timeout,
+ sock_type = conf.sock_type,
+ max_retry_times = conf.max_retry_times,
+ retry_interval = conf.retry_interval,
+ pool_size = conf.pool_size,
+ tls = conf.tls,
+ })
+
+ if not logger then
+ res = false
+ err_msg = "failed when initiating the sys logger processor: " .. err
+ return res, err_msg
+ end
+
+ -- reuse the logger object
+ local ok, err = logger:log(core.json.encode(log_message))
+ if not ok then
+ res = false
+ err_msg = "failed to log message: " .. err
+ end
+
+ return res, err_msg
+end
+
+-- remove stale objects from the memory after timer expires
+local function remove_stale_objects(premature)
+ if premature then
+ return
+ end
+
+ for key, batch in pairs(buffers) do
+ if #batch.entry_buffer.entries == 0 and #batch.batch_to_process == 0 then
+ core.log.debug("removing batch processor stale object, route id:", tostring(key))
+ buffers[key] = nil
+ end
+ end
+
+ stale_timer_running = false
+end
+
+-- log phase in APISIX
+function _M.log(conf)
+ local entry = log_util.get_full_log(ngx, conf)
+
+ if not entry.route_id then
+ core.log.error("failed to obtain the route id for sys logger")
+ return
+ end
+
+ local log_buffer = buffers[entry.route_id]
+
+ if not stale_timer_running then
+ -- run the timer every 30 mins if any log is present
+ timer_at(1800, remove_stale_objects)
+ stale_timer_running = true
+ end
+
+ if log_buffer then
+ log_buffer:push(entry)
+ return
+ end
+
+ -- Generate a function to be executed by the batch processor
+ local func = function(entries, batch_max_size)
+ local data, err
+ if batch_max_size == 1 then
+ data, err = core.json.encode(entries[1]) -- encode as single {}
+ else
+ data, err = core.json.encode(entries) -- encode as array [{}]
+ end
+
+ if not data then
+ return false, 'error occurred while encoding the data: ' .. err
+ end
+
+ return send_syslog_data(conf, data)
+ end
+
+ local config = {
+ name = conf.name,
+ retry_delay = conf.retry_interval,
+ batch_max_size = conf.batch_max_size,
+ max_retry_count = conf.max_retry_times,
+ buffer_duration = conf.buffer_duration,
+ inactive_timeout = conf.timeout,
+ }
+
+ local err
+ log_buffer, err = batch_processor:new(func, config)
+
+ if not log_buffer then
+ core.log.error("error when creating the batch processor: ", err)
+ return
+ end
+
+ buffers[entry.route_id] = log_buffer
+ log_buffer:push(entry)
+
+end
+
+return _M
diff --git a/apisix/plugins/tcp-logger.lua b/apisix/plugins/tcp-logger.lua
index 9eeef3320b778..ced5f8f23dad3 100644
--- a/apisix/plugins/tcp-logger.lua
+++ b/apisix/plugins/tcp-logger.lua
@@ -22,6 +22,9 @@ local tostring = tostring
local buffers = {}
local ngx = ngx
local tcp = ngx.socket.tcp
+local ipairs = ipairs
+local stale_timer_running = false
+local timer_at = ngx.timer.at
local schema = {
type = "object",
@@ -37,6 +40,7 @@ local schema = {
buffer_duration = {type = "integer", minimum = 1, default = 60},
inactive_timeout = {type = "integer", minimum = 1, default = 5},
batch_max_size = {type = "integer", minimum = 1, default = 1000},
+ include_req_body = {type = "boolean", default = false}
},
required = {"host", "port"}
}
@@ -94,9 +98,25 @@ local function send_tcp_data(conf, log_message)
return res, err_msg
end
+-- remove stale objects from the memory after timer expires
+local function remove_stale_objects(premature)
+ if premature then
+ return
+ end
+
+ for key, batch in pairs(buffers) do
+ if #batch.entry_buffer.entries == 0 and #batch.batch_to_process == 0 then
+ core.log.debug("removing batch processor stale object, route id:", tostring(key))
+ buffers[key] = nil
+ end
+ end
+
+ stale_timer_running = false
+end
+
function _M.log(conf)
- local entry = log_util.get_full_log(ngx)
+ local entry = log_util.get_full_log(ngx, conf)
if not entry.route_id then
core.log.error("failed to obtain the route id for tcp logger")
@@ -105,6 +125,12 @@ function _M.log(conf)
local log_buffer = buffers[entry.route_id]
+ if not stale_timer_running then
+ -- run the timer every 30 mins if any log is present
+ timer_at(1800, remove_stale_objects)
+ stale_timer_running = true
+ end
+
if log_buffer then
log_buffer:push(entry)
return
diff --git a/apisix/plugins/udp-logger.lua b/apisix/plugins/udp-logger.lua
index b1b565fb1b2d0..cec782a347624 100644
--- a/apisix/plugins/udp-logger.lua
+++ b/apisix/plugins/udp-logger.lua
@@ -22,6 +22,9 @@ local tostring = tostring
local buffers = {}
local ngx = ngx
local udp = ngx.socket.udp
+local ipairs = ipairs
+local stale_timer_running = false
+local timer_at = ngx.timer.at
local schema = {
type = "object",
@@ -33,6 +36,7 @@ local schema = {
buffer_duration = {type = "integer", minimum = 1, default = 60},
inactive_timeout = {type = "integer", minimum = 1, default = 5},
batch_max_size = {type = "integer", minimum = 1, default = 1000},
+ include_req_body = {type = "boolean", default = false}
},
required = {"host", "port"}
}
@@ -77,9 +81,25 @@ local function send_udp_data(conf, log_message)
return res, err_msg
end
+-- remove stale objects from the memory after timer expires
+local function remove_stale_objects(premature)
+ if premature then
+ return
+ end
+
+ for key, batch in pairs(buffers) do
+ if #batch.entry_buffer.entries == 0 and #batch.batch_to_process == 0 then
+ core.log.debug("removing batch processor stale object, route id:", tostring(key))
+ buffers[key] = nil
+ end
+ end
+
+ stale_timer_running = false
+end
+
function _M.log(conf)
- local entry = log_util.get_full_log(ngx)
+ local entry = log_util.get_full_log(ngx, conf)
if not entry.route_id then
core.log.error("failed to obtain the route id for udp logger")
@@ -88,6 +108,12 @@ function _M.log(conf)
local log_buffer = buffers[entry.route_id]
+ if not stale_timer_running then
+ -- run the timer every 30 mins if any log is present
+ timer_at(1800, remove_stale_objects)
+ stale_timer_running = true
+ end
+
if log_buffer then
log_buffer:push(entry)
return
diff --git a/apisix/plugins/uri-blocker.lua b/apisix/plugins/uri-blocker.lua
new file mode 100644
index 0000000000000..ab5b6828e7724
--- /dev/null
+++ b/apisix/plugins/uri-blocker.lua
@@ -0,0 +1,86 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+local core = require("apisix.core")
+local re_compile = require("resty.core.regex").re_match_compile
+local re_find = ngx.re.find
+local ipairs = ipairs
+
+local schema = {
+ type = "object",
+ properties = {
+ block_rules = {
+ type = "array",
+ items = {
+ type = "string",
+ minLength = 1,
+ maxLength = 4096,
+ },
+ uniqueItems = true
+ },
+ rejected_code = {
+ type = "integer",
+ minimum = 200,
+ default = 403
+ },
+ },
+ required = {"block_rules"},
+}
+
+
+local plugin_name = "uri-blocker"
+
+local _M = {
+ version = 0.1,
+ priority = 2900,
+ name = plugin_name,
+ schema = schema,
+}
+
+
+function _M.check_schema(conf)
+ local ok, err = core.schema.check(schema, conf)
+ if not ok then
+ return false, err
+ end
+
+ local block_rules = {}
+ for i, re_rule in ipairs(conf.block_rules) do
+ local ok, err = re_compile(re_rule, "j")
+ -- core.log.warn("ok: ", tostring(ok), " err: ", tostring(err), " re_rule: ", re_rule)
+ if not ok then
+ return false, err
+ end
+ block_rules[i] = re_rule
+ end
+
+ conf.block_rules_concat = core.table.concat(block_rules, "|")
+ core.log.info("concat block_rules: ", conf.block_rules_concat)
+ return true
+end
+
+
+function _M.rewrite(conf, ctx)
+ core.log.info("uri: ", ctx.var.request_uri)
+ core.log.info("block uri rules: ", conf.block_rules_concat)
+ local from = re_find(ctx.var.request_uri, conf.block_rules_concat, "jo")
+ if from then
+ core.response.exit(conf.rejected_code)
+ end
+end
+
+
+return _M
diff --git a/apisix/plugins/zipkin.lua b/apisix/plugins/zipkin.lua
index 56412390e3794..934d88398ac15 100644
--- a/apisix/plugins/zipkin.lua
+++ b/apisix/plugins/zipkin.lua
@@ -48,7 +48,7 @@ local schema = {
local _M = {
version = 0.1,
- priority = -1000, -- last running plugin, but before serverless post func
+ priority = -1000,
name = plugin_name,
schema = schema,
}
diff --git a/apisix/router.lua b/apisix/router.lua
index d3b45941d9950..4ba8709937171 100644
--- a/apisix/router.lua
+++ b/apisix/router.lua
@@ -15,12 +15,13 @@
-- limitations under the License.
--
local require = require
-local core = require("apisix.core")
-local error = error
-local pairs = pairs
+local core = require("apisix.core")
+local error = error
+local pairs = pairs
+local ipairs = ipairs
-local _M = {version = 0.2}
+local _M = {version = 0.3}
local function filter(route)
@@ -29,17 +30,36 @@ local function filter(route)
return
end
- if not route.value.upstream then
+ if not route.value.upstream or not route.value.upstream.nodes then
return
end
- for addr, _ in pairs(route.value.upstream.nodes or {}) do
- local host = core.utils.parse_addr(addr)
- if not core.utils.parse_ipv4(host) and
- not core.utils.parse_ipv6(host) then
- route.has_domain = true
- break
+ local nodes = route.value.upstream.nodes
+ if core.table.isarray(nodes) then
+ for _, node in ipairs(nodes) do
+ local host = node.host
+ if not core.utils.parse_ipv4(host) and
+ not core.utils.parse_ipv6(host) then
+ route.has_domain = true
+ break
+ end
end
+ else
+ local new_nodes = core.table.new(core.table.nkeys(nodes), 0)
+ for addr, weight in pairs(nodes) do
+ local host, port = core.utils.parse_addr(addr)
+ if not core.utils.parse_ipv4(host) and
+ not core.utils.parse_ipv6(host) then
+ route.has_domain = true
+ end
+ local node = {
+ host = host,
+ port = port,
+ weight = weight,
+ }
+ core.table.insert(new_nodes, node)
+ end
+ route.value.upstream.nodes = new_nodes
end
core.log.info("filter route: ", core.json.delay_encode(route))
@@ -78,7 +98,7 @@ end
function _M.stream_init_worker()
local router_stream = require("apisix.stream.router.ip_port")
- router_stream.stream_init_worker()
+ router_stream.stream_init_worker(filter)
_M.router_stream = router_stream
end
@@ -88,4 +108,8 @@ function _M.http_routes()
end
+-- for test
+_M.filter_test = filter
+
+
return _M
diff --git a/apisix/schema_def.lua b/apisix/schema_def.lua
index e261c98c1ec40..d580be6982ed3 100644
--- a/apisix/schema_def.lua
+++ b/apisix/schema_def.lua
@@ -18,7 +18,7 @@ local schema = require('apisix.core.schema')
local setmetatable = setmetatable
local error = error
-local _M = {version = 0.4}
+local _M = {version = 0.5}
local plugins_schema = {
@@ -225,11 +225,9 @@ local health_checker = {
}
-local upstream_schema = {
- type = "object",
- properties = {
- nodes = {
- description = "nodes of upstream",
+local nodes_schema = {
+ anyOf = {
+ {
type = "object",
patternProperties = {
[".*"] = {
@@ -240,6 +238,39 @@ local upstream_schema = {
},
minProperties = 1,
},
+ {
+ type = "array",
+ minItems = 1,
+ items = {
+ type = "object",
+ properties = {
+ host = host_def,
+ port = {
+ description = "port of node",
+ type = "integer",
+ minimum = 1,
+ },
+ weight = {
+ description = "weight of node",
+ type = "integer",
+ minimum = 0,
+ },
+ metadata = {
+ description = "metadata of node",
+ type = "object",
+ }
+ },
+ required = {"host", "port", "weight"},
+ },
+ }
+ }
+}
+
+
+local upstream_schema = {
+ type = "object",
+ properties = {
+ nodes = nodes_schema,
retries = {
type = "integer",
minimum = 1,
@@ -296,12 +327,15 @@ local upstream_schema = {
description = "enable websocket for request",
type = "boolean"
},
+ name = {type = "string", maxLength = 50},
desc = {type = "string", maxLength = 256},
+ service_name = {type = "string", maxLength = 50},
id = id_schema
},
anyOf = {
{required = {"type", "nodes"}},
{required = {"type", "k8s_deployment_info"}},
+ {required = {"type", "service_name"}},
},
additionalProperties = false,
}
@@ -336,6 +370,7 @@ _M.route = {
},
uniqueItems = true,
},
+ name = {type = "string", maxLength = 50},
desc = {type = "string", maxLength = 256},
priority = {type = "integer", default = 0},
@@ -413,6 +448,7 @@ _M.service = {
plugins = plugins_schema,
upstream = upstream_schema,
upstream_id = id_schema,
+ name = {type = "string", maxLength = 50},
desc = {type = "string", maxLength = 256},
},
anyOf = {
@@ -445,6 +481,7 @@ _M.upstream = upstream_schema
_M.ssl = {
type = "object",
properties = {
+ id = id_schema,
cert = {
type = "string", minLength = 128, maxLength = 64*1024
},
@@ -454,13 +491,34 @@ _M.ssl = {
sni = {
type = "string",
pattern = [[^\*?[0-9a-zA-Z-.]+$]],
+ },
+ snis = {
+ type = "array",
+ items = {
+ type = "string",
+ pattern = [[^\*?[0-9a-zA-Z-.]+$]],
+ }
+ },
+ exptime = {
+ type = "integer",
+ minimum = 1588262400, -- 2020/5/1 0:0:0
+ },
+ status = {
+ description = "ssl status, 1 to enable, 0 to disable",
+ type = "integer",
+ enum = {1, 0},
+ default = 1
}
},
- required = {"sni", "key", "cert"},
+ oneOf = {
+ {required = {"sni", "key", "cert"}},
+ {required = {"snis", "key", "cert"}}
+ },
additionalProperties = false,
}
+
_M.proto = {
type = "object",
properties = {
@@ -476,6 +534,7 @@ _M.proto = {
_M.global_rule = {
type = "object",
properties = {
+ id = id_schema,
plugins = plugins_schema
},
required = {"plugins"},
@@ -486,6 +545,7 @@ _M.global_rule = {
_M.stream_route = {
type = "object",
properties = {
+ id = id_schema,
remote_addr = remote_addr_def,
server_addr = {
description = "server IP",
diff --git a/apisix/stream/plugins/mqtt-proxy.lua b/apisix/stream/plugins/mqtt-proxy.lua
index f8d3552c66c42..b5334306b1697 100644
--- a/apisix/stream/plugins/mqtt-proxy.lua
+++ b/apisix/stream/plugins/mqtt-proxy.lua
@@ -15,8 +15,8 @@
-- limitations under the License.
--
local core = require("apisix.core")
-local balancer = require("ngx.balancer")
-local bit = require "bit"
+local upstream = require("apisix.upstream")
+local bit = require("bit")
local ngx = ngx
local ngx_exit = ngx.exit
local str_byte = string.byte
@@ -158,25 +158,28 @@ function _M.preread(conf, ctx)
end
core.log.info("mqtt client id: ", res.client_id)
-end
+ local up_conf = {
+ type = "roundrobin",
+ nodes = {
+ {host = conf.upstream.ip, port = conf.upstream.port, weight = 1},
+ }
+ }
-function _M.log(conf, ctx)
- core.log.info("plugin log phase, conf: ", core.json.encode(conf))
-end
+ local ok, err = upstream.check_schema(up_conf)
+ if not ok then
+ return 500, err
+ end
+ local matched_route = ctx.matched_route
+ upstream.set(ctx, up_conf.type .. "#route_" .. matched_route.value.id,
+ ctx.conf_version, up_conf, matched_route)
+ return
+end
-function _M.balancer(conf, ctx)
- core.log.info("plugin balancer phase, conf: ", core.json.encode(conf))
- -- ctx.balancer_name = plugin_name
- local up = conf.upstream
- ctx.balancer_name = plugin_name
- local ok, err = balancer.set_current_peer(up.ip, up.port)
- if not ok then
- core.log.error("failed to set server peer: ", err)
- return ngx_exit(1)
- end
+function _M.log(conf, ctx)
+ core.log.info("plugin log phase, conf: ", core.json.encode(conf))
end
diff --git a/apisix/upstream.lua b/apisix/upstream.lua
new file mode 100644
index 0000000000000..203b713d01070
--- /dev/null
+++ b/apisix/upstream.lua
@@ -0,0 +1,154 @@
+--
+-- Licensed to the Apache Software Foundation (ASF) under one or more
+-- contributor license agreements. See the NOTICE file distributed with
+-- this work for additional information regarding copyright ownership.
+-- The ASF licenses this file to You under the Apache License, Version 2.0
+-- (the "License"); you may not use this file except in compliance with
+-- the License. You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+--
+
+local core = require("apisix.core")
+local error = error
+local tostring = tostring
+local ipairs = ipairs
+local pairs = pairs
+local upstreams
+
+
+local _M = {}
+
+
+local function set_directly(ctx, key, ver, conf, parent)
+ if not ctx then
+ error("missing argument ctx", 2)
+ end
+ if not key then
+ error("missing argument key", 2)
+ end
+ if not ver then
+ error("missing argument ver", 2)
+ end
+ if not conf then
+ error("missing argument conf", 2)
+ end
+ if not parent then
+ error("missing argument parent", 2)
+ end
+
+ ctx.upstream_conf = conf
+ ctx.upstream_version = ver
+ ctx.upstream_key = key
+ ctx.upstream_healthcheck_parent = parent
+ return
+end
+_M.set = set_directly
+
+
+function _M.set_by_route(route, api_ctx)
+ if api_ctx.upstream_conf then
+ return true
+ end
+
+ local up_id = route.value.upstream_id
+ if up_id then
+ if not upstreams then
+ return false, "need to create an etcd instance for fetching "
+ .. "upstream information"
+ end
+
+ local up_obj = upstreams:get(tostring(up_id))
+ if not up_obj then
+ return false, "failed to find upstream by id: " .. up_id
+ end
+ core.log.info("upstream: ", core.json.delay_encode(up_obj))
+
+ local up_conf = up_obj.dns_value or up_obj.value
+ set_directly(api_ctx, up_conf.type .. "#upstream_" .. up_id,
+ up_obj.modifiedIndex, up_conf, up_obj)
+ return true
+ end
+
+ local up_conf = (route.dns_value and route.dns_value.upstream)
+ or route.value.upstream
+ if not up_conf then
+ return false, "missing upstream configuration in Route or Service"
+ end
+
+ set_directly(api_ctx, up_conf.type .. "#route_" .. route.value.id,
+ api_ctx.conf_version, up_conf, route)
+ return true
+end
+
+
+function _M.upstreams()
+ if not upstreams then
+ return nil, nil
+ end
+
+ return upstreams.values, upstreams.conf_version
+end
+
+
+function _M.check_schema(conf)
+ return core.schema.check(core.schema.upstream, conf)
+end
+
+
+function _M.init_worker()
+ local err
+ upstreams, err = core.config.new("/upstreams", {
+ automatic = true,
+ item_schema = core.schema.upstream,
+ filter = function(upstream)
+ upstream.has_domain = false
+ if not upstream.value or not upstream.value.nodes then
+ return
+ end
+
+ local nodes = upstream.value.nodes
+ if core.table.isarray(nodes) then
+ for _, node in ipairs(nodes) do
+ local host = node.host
+ if not core.utils.parse_ipv4(host) and
+ not core.utils.parse_ipv6(host) then
+ upstream.has_domain = true
+ break
+ end
+ end
+ else
+ local new_nodes = core.table.new(core.table.nkeys(nodes), 0)
+ for addr, weight in pairs(nodes) do
+ local host, port = core.utils.parse_addr(addr)
+ if not core.utils.parse_ipv4(host) and
+ not core.utils.parse_ipv6(host) then
+ upstream.has_domain = true
+ end
+ local node = {
+ host = host,
+ port = port,
+ weight = weight,
+ }
+ core.table.insert(new_nodes, node)
+ end
+ upstream.value.nodes = new_nodes
+ end
+
+ core.log.info("filter upstream: ", core.json.delay_encode(upstream))
+ end,
+ })
+ if not upstreams then
+ error("failed to create etcd instance for fetching upstream: " .. err)
+ return
+ end
+end
+
+
+return _M
diff --git a/apisix/utils/log-util.lua b/apisix/utils/log-util.lua
index 6ee03b288db6f..b11a435808f88 100644
--- a/apisix/utils/log-util.lua
+++ b/apisix/utils/log-util.lua
@@ -18,7 +18,7 @@ local core = require("apisix.core")
local _M = {}
-local function get_full_log(ngx)
+local function get_full_log(ngx, conf)
local ctx = ngx.ctx.api_ctx
local var = ctx.var
local service_id
@@ -34,7 +34,7 @@ local function get_full_log(ngx)
service_id = var.host
end
- return {
+ local log = {
request = {
url = url,
uri = var.request_uri,
@@ -56,6 +56,20 @@ local function get_full_log(ngx)
start_time = ngx.req.start_time() * 1000,
latency = (ngx.now() - ngx.req.start_time()) * 1000
}
+
+ if conf.include_req_body then
+ local body = ngx.req.get_body_data()
+ if body then
+ log.request.body = body
+ else
+ local body_file = ngx.req.get_body_file()
+ if body_file then
+ log.request.body_file = body_file
+ end
+ end
+ end
+
+ return log
end
_M.get_full_log = get_full_log
diff --git a/benchmark/fake-apisix/conf/nginx.conf b/benchmark/fake-apisix/conf/nginx.conf
index 327169adf189b..8666c29979772 100644
--- a/benchmark/fake-apisix/conf/nginx.conf
+++ b/benchmark/fake-apisix/conf/nginx.conf
@@ -24,7 +24,6 @@ pid logs/nginx.pid;
worker_rlimit_nofile 20480;
events {
- accept_mutex off;
worker_connections 10620;
}
@@ -33,6 +32,9 @@ worker_shutdown_timeout 3;
http {
lua_package_path "$prefix/lua/?.lua;;";
+ log_format main '$remote_addr - $remote_user [$time_local] $http_host "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_status $upstream_response_time';
+ access_log logs/access.log main buffer=16384 flush=5;
+
init_by_lua_block {
require "resty.core"
apisix = require("apisix")
@@ -60,8 +62,6 @@ http {
listen 9080;
- access_log off;
-
server_tokens off;
more_set_headers 'Server: APISIX web server';
@@ -106,6 +106,10 @@ http {
apisix.http_header_filter_phase()
}
+ body_filter_by_lua_block {
+ apisix.http_body_filter_phase()
+ }
+
log_by_lua_block {
apisix.http_log_phase()
}
diff --git a/benchmark/fake-apisix/lua/apisix.lua b/benchmark/fake-apisix/lua/apisix.lua
index 30671f7fa41b6..ea5bf15bf1117 100644
--- a/benchmark/fake-apisix/lua/apisix.lua
+++ b/benchmark/fake-apisix/lua/apisix.lua
@@ -25,7 +25,7 @@ end
local function fake_fetch()
ngx.ctx.ip = "127.0.0.1"
- ngx.ctx.port = 80
+ ngx.ctx.port = 1980
end
function _M.http_access_phase()
@@ -42,6 +42,12 @@ function _M.http_header_filter_phase()
end
end
+function _M.http_body_filter_phase()
+ if ngx.ctx then
+ -- do something
+ end
+end
+
function _M.http_log_phase()
if ngx.ctx then
-- do something
diff --git a/benchmark/run.sh b/benchmark/run.sh
index ff068d64f57b1..c41b554d2ce5e 100755
--- a/benchmark/run.sh
+++ b/benchmark/run.sh
@@ -36,7 +36,12 @@ function onCtrlC () {
sudo openresty -p $PWD/benchmark/server -s stop || exit 1
}
-sed -i "s/worker_processes [0-9]*/worker_processes $worker_cnt/g" conf/nginx.conf
+if [[ "$(uname)" == "Darwin" ]]; then
+ sed -i "" "s/worker_processes .*/worker_processes $worker_cnt;/g" conf/nginx.conf
+else
+ sed -i "s/worker_processes .*/worker_processes $worker_cnt;/g" conf/nginx.conf
+fi
+
make run
sleep 3
diff --git a/bin/apisix b/bin/apisix
index 1659de218ddce..4d7857ada7227 100755
--- a/bin/apisix
+++ b/bin/apisix
@@ -21,6 +21,8 @@ local function trim(s)
return (s:gsub("^%s*(.-)%s*$", "%1"))
end
+-- Note: the return value of `excute_cmd` ends with a line break;
+-- it is recommended to use the `trim` function to strip it.
local function excute_cmd(cmd)
local t, err = io.popen(cmd)
if not t then
@@ -103,7 +105,6 @@ events {
}
worker_rlimit_core {* worker_rlimit_core *};
-working_directory /tmp/apisix_cores/;
worker_shutdown_timeout 3;
@@ -179,6 +180,7 @@ http {
lua_shared_dict upstream-healthcheck 10m;
lua_shared_dict worker-events 10m;
lua_shared_dict lrucache-lock 10m;
+ lua_shared_dict skywalking-tracing-buffer 100m;
# for openid-connect plugin
lua_shared_dict discovery 1m; # cache for discovery metadata documents
@@ -227,7 +229,7 @@ http {
log_format main '$remote_addr - $remote_user [$time_local] $http_host "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_status $upstream_response_time';
- access_log {* http.access_log *} main buffer=32768 flush=3;
+ access_log {* http.access_log *} main buffer=16384 flush=3;
open_file_cache max=1000 inactive=60;
client_max_body_size 0;
keepalive_timeout {* http.keepalive_timeout *};
@@ -239,6 +241,7 @@ http {
more_set_headers 'Server: APISIX web server';
include mime.types;
+ charset utf-8;
{% if real_ip_header then %}
real_ip_header {* real_ip_header *};
@@ -284,7 +287,18 @@ http {
{% if enable_admin and port_admin then %}
server {
+ {%if https_admin then%}
+ listen {* port_admin *} ssl;
+ ssl_certificate cert/apisix_admin_ssl.crt;
+ ssl_certificate_key cert/apisix_admin_ssl.key;
+ ssl_session_cache shared:SSL:1m;
+
+ ssl_protocols {* ssl.ssl_protocols *};
+ ssl_ciphers {* ssl.ssl_ciphers *};
+ ssl_prefer_server_ciphers on;
+ {% else %}
listen {* port_admin *};
+ {%end%}
log_not_found off;
location /apisix/admin {
{%if allow_admin then%}
@@ -309,7 +323,7 @@ http {
alias dashboard/;
- try_files $uri $uri/index.html /index.html;
+ try_files $uri $uri/index.html /index.html =404;
}
location /robots.txt {
@@ -379,7 +393,7 @@ http {
alias dashboard/;
- try_files $uri $uri/index.html /index.html;
+ try_files $uri $uri/index.html /index.html =404;
}
{% end %}
@@ -663,7 +677,7 @@ local function init()
local sys_conf = {
lua_path = pkg_path_org,
lua_cpath = pkg_cpath_org,
- os_name = excute_cmd("uname"),
+ os_name = trim(excute_cmd("uname")),
apisix_lua_home = apisix_home,
with_module_status = with_module_status,
error_log = {level = "warn"},
@@ -699,6 +713,7 @@ local function init()
if(sys_conf["enable_dev_mode"] == true) then
sys_conf["worker_processes"] = 1
+ sys_conf["enable_reuseport"] = false
else
sys_conf["worker_processes"] = "auto"
end
@@ -768,6 +783,18 @@ local function init_etcd(show_output)
local host_count = #(yaml_conf.etcd.host)
+ -- check whether the user has enabled etcd v2 protocol
+ for index, host in ipairs(yaml_conf.etcd.host) do
+ uri = host .. "/v2/keys"
+ local cmd = "curl -i -m ".. timeout * 2 .. " -o /dev/null -s -w %{http_code} " .. uri
+ local res = excute_cmd(cmd)
+ if res == "404" then
+ io.stderr:write(string.format("failed: please make sure that you have enabled the v2 protocol of etcd on %s.\n", host))
+ return
+ end
+ end
+
+ local etcd_ok = false
for index, host in ipairs(yaml_conf.etcd.host) do
local is_success = true
@@ -786,7 +813,7 @@ local function init_etcd(show_output)
if not res:find("index", 1, true)
and not res:find("createdIndex", 1, true) then
is_success = false
- if (index == hostCount) then
+ if (index == host_count) then
error(cmd .. "\n" .. res)
end
break
@@ -799,9 +826,14 @@ local function init_etcd(show_output)
end
if is_success then
+ etcd_ok = true
break
end
end
+
+ if not etcd_ok then
+ error("none of the configured etcd can be connected")
+ end
end
_M.init_etcd = init_etcd
@@ -830,13 +862,17 @@ end
function _M.reload()
local test_cmd = openresty_args .. [[ -t -q ]]
- if os.execute((test_cmd)) ~= 0 then
+ -- On success:
+ -- on Linux, os.execute returns 0;
+ -- on macOS, os.execute returns three values: true, "exit", 0, and we need the first.
+ local test_ret = os.execute((test_cmd))
+ if (test_ret == 0 or test_ret == true) then
+ local cmd = openresty_args .. [[ -s reload]]
+ -- print(cmd)
+ os.execute(cmd)
return
end
-
- local cmd = openresty_args .. [[ -s reload]]
- -- print(cmd)
- os.execute(cmd)
+ print("test openresty failed")
end
function _M.version()
diff --git a/conf/cert/apisix_admin_ssl.crt b/conf/cert/apisix_admin_ssl.crt
new file mode 100644
index 0000000000000..82d7fc3aa31a5
--- /dev/null
+++ b/conf/cert/apisix_admin_ssl.crt
@@ -0,0 +1,33 @@
+-----BEGIN CERTIFICATE-----
+MIIFsTCCA5mgAwIBAgIUODyT8W4gAxf8uwMNmtj5M1ANoUwwDQYJKoZIhvcNAQEL
+BQAwVjELMAkGA1UEBhMCQ04xEjAQBgNVBAgMCUd1YW5nRG9uZzEPMA0GA1UEBwwG
+Wmh1SGFpMQ0wCwYDVQQKDARhcGk3MRMwEQYDVQQDDAphcGlzaXguZGV2MCAXDTIw
+MDYwNDAzMzc1MFoYDzIxMjAwNTExMDMzNzUwWjBWMQswCQYDVQQGEwJDTjESMBAG
+A1UECAwJR3VhbmdEb25nMQ8wDQYDVQQHDAZaaHVIYWkxDTALBgNVBAoMBGFwaTcx
+EzARBgNVBAMMCmFwaXNpeC5kZXYwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIK
+AoICAQDQveSdplH49Lr+LsLWpGJbNRhf2En0V4SuFKpzGFP7mXaI7rMnpdH3BUVY
+S3juMgPOdNh6ho4BeSbGZGfU3lG1NwIOXiPNA1mrTWGNGV97crJDVZeWTuDpqNHJ
+4ATrnF6RnRbg0en8rjVtce6LBMrDJVyGbi9VAqBUPrCmzT/l0V1jPL6KNSN8mQog
+ladrJuzUanfhWM9K9xyM+/SUt1MNUYFLNsVHasPzsi5/YDRBiwuzTtiT56O6yge2
+lvrdPFvULrCxlGteyvhtrFJwqjN//YtnQFooNR0CXBfXs0a7WGgMjawupuP1JKiY
+t9KEcGHWGZDeLfsGGKgQ9G+PaP4y+gHjLr5xQvwt68otpoafGy+BpOoHZZFoLBpx
+TtJKA3qnwyZg9zr7lrtqr8CISO/SEyh6xkAOUzb7yc2nHu9UpruzVIR7xI7pjc7f
+2T6WyCVy6gFYQwzFLwkN/3O+ZJkioxXsnwaYWDj61k3d9ozVDkVkTuxmNJjXV8Ta
+htGRAHo0/uHmpFTcaQfDf5o+iWi4z9B5kgfA/A1XWFQlCH1kl3mHKg7JNCN9qGF8
+rG+YzdiLQfo5OqJSvzGHRXbdGI2JQe/zyJHsMO7d0AhwXuPOWGTTAODOPlaBCxNB
+AgjuUgt+3saqCrK4eaOo8sPt055AYJhZlaTH4EeD4sv7rJGm7wIDAQABo3UwczAd
+BgNVHQ4EFgQUPS1LXZMqgQvH/zQHHzgTzrd7PIIwHwYDVR0jBBgwFoAUPS1LXZMq
+gQvH/zQHHzgTzrd7PIIwDAYDVR0TBAUwAwEB/zAjBgNVHREEHDAaggphcGlzaXgu
+ZGV2ggwqLmFwaXNpeC5kZXYwDQYJKoZIhvcNAQELBQADggIBAMlwNS8uo3JkkshI
+rpYobdjCZfr74PBl+LhoihvzHs25/in3+CxETRA8cYo5pRotqdA63po3wiCCPs6a
+mZiELQxyGHhFcqoYxnoURR4nyogRZLA6jjLGkbG4H+CA4ApmZmvGnP3X5uQW4v5q
+IdqIXL3BvoUBln8GMEC7Rz5SGUjWG03JPkl6MdeziFyHkwdBCOrtK5m7icRncvq+
+iL8CMUx024LLI6A5hTBPwfVfgbWJTSv7tEu85q54ZZoYQhiD8dde4D7g5/noPvXM
+ZyA9C3Sl981+pUhhazad9j9k8DCcqf9e8yH9lPY26tjiEcShv4YnwbErWzJU1F9s
+ZI5Z6nj5PU66upnBWAWV7fWCOrlouB4GjNaznSNrmpn4Bb2+FinDK3t4AfWDPS5s
+ljQBGQNXOd30DC7BdNAF5dQAUhVfz1EgQGqYa+frMQLiv8rNMs7h6gKQEqU+jC/1
+jbGe4/iwc0UeTtSgTPHMofqjqc99/R/ZqtJ3qFPJmoWpyu0NlNINw2KWRQaMoGLo
+WgDCS0YA5/hNXVFcWnZ73jY62yrVSoj+sFbkUpGWhEFnO+uSmBv8uwY3UeCOQDih
+X7Yazs3TZRqEPU+25QATf0kbxyzlWbGkwvyRD8x+n3ZHs5Ilhrc6jWHqM/S3ir7i
+m9GcWiwg++EbusQsqs3w3uKAHAdT
+-----END CERTIFICATE-----
diff --git a/conf/cert/apisix_admin_ssl.key b/conf/cert/apisix_admin_ssl.key
new file mode 100644
index 0000000000000..ec889056ffb63
--- /dev/null
+++ b/conf/cert/apisix_admin_ssl.key
@@ -0,0 +1,51 @@
+-----BEGIN RSA PRIVATE KEY-----
+MIIJKQIBAAKCAgEA0L3knaZR+PS6/i7C1qRiWzUYX9hJ9FeErhSqcxhT+5l2iO6z
+J6XR9wVFWEt47jIDznTYeoaOAXkmxmRn1N5RtTcCDl4jzQNZq01hjRlfe3KyQ1WX
+lk7g6ajRyeAE65xekZ0W4NHp/K41bXHuiwTKwyVchm4vVQKgVD6wps0/5dFdYzy+
+ijUjfJkKIJWnaybs1Gp34VjPSvccjPv0lLdTDVGBSzbFR2rD87Iuf2A0QYsLs07Y
+k+ejusoHtpb63Txb1C6wsZRrXsr4baxScKozf/2LZ0BaKDUdAlwX17NGu1hoDI2s
+Lqbj9SSomLfShHBh1hmQ3i37BhioEPRvj2j+MvoB4y6+cUL8LevKLaaGnxsvgaTq
+B2WRaCwacU7SSgN6p8MmYPc6+5a7aq/AiEjv0hMoesZADlM2+8nNpx7vVKa7s1SE
+e8SO6Y3O39k+lsglcuoBWEMMxS8JDf9zvmSZIqMV7J8GmFg4+tZN3faM1Q5FZE7s
+ZjSY11fE2obRkQB6NP7h5qRU3GkHw3+aPolouM/QeZIHwPwNV1hUJQh9ZJd5hyoO
+yTQjfahhfKxvmM3Yi0H6OTqiUr8xh0V23RiNiUHv88iR7DDu3dAIcF7jzlhk0wDg
+zj5WgQsTQQII7lILft7GqgqyuHmjqPLD7dOeQGCYWZWkx+BHg+LL+6yRpu8CAwEA
+AQKCAgBNsbBLAWHXYPfMrgj1LUAypIOLAQ0dtgl7ZdO/fRmdNxSIiRgDtNN+tuaF
+o6nCNrl1+cWtbTGj2L0W8L442/rbkTrhsCZxI0MX4HhjtUL1xs4VA+GlH3zVW3Gi
+SxBpxczpM+gVC+ykkQ7vyo04DzONCPX0T0Ssxop4cND9dL3Iw3GYAz8EYBzyPmAn
+mqwy1M0nju1J4e1eALYOv6TcSZPPDDwsi5lIKLQAm5x06pDoqGFVfw5blsc5OgM+
+8dkzyUiApFQ99Hk2UiO/ZnlU1/TNOcjOSISGHKbMfwycy2yTRKeNrJmez51fXCKo
+nRrtEotHzkI+gCzDqx+7F9ACN9kM4f4JO5ca0/My6tCY+mH8TA/nVzMnUpL7329w
+NobuNTpyA6x5nmB3QqElrzQCRtTj7Nw5ytMdRbByJhXww9C5tajUysdq8oGoZdz5
+94kXr6qCC5Qm3CkgyF2RjqZyg9tHUEEdaFKouHgziiqG9P2Nk1SHk7Jd7bF4rleI
+i93u/f0fdVK7aMksofgUbOmfhnS+o1NxerVcbdX+E/iv6yfkrYDb46y3//4dcpwk
+TeUEMCjc7ShwvYPq350q3jmzgwxeTK8ZdXwJymdJ7MaGcnMXPqd9A43evYM6nG6f
+i3l2tYhH4cp6misGChnGORR68qsRkY8ssvSFNFzjcFHhnPyoCQKCAQEA8isIC1IJ
+Iq9kB4mDVh0QdiuoBneNOEHy/8fASeZsqedu0OZPyoXU96iOhXuqf8sQ33ydvPef
+iRwasLLkgw8sDeWILUjS36ZzwGP2QNxWfrapCFS8VfKl7hTPMVp0Wzxh8qqpGLSh
+O0W7EEAJCgzzULagfupaO0Chmb3LZqXRp8m5oubnmE+9z0b5GrCIT1S8Yay2mEw9
+jxqZJGBhV7QnupyC2DIxLXlGmQk7Qs1+1mCCFwyfugHXclWYa+fet/79SkkADK0/
+ysxfy+FdZgGT/Ba5odsEpt1zH+tw4WXioJsX9mU3zAHbpPqtcfuVU+2xyKfQYrRG
+NSm9MMNmart0wwKCAQEA3Koaj/0gNxLLslLIES50KmmagzU8CkEmCa/WLoVy02xr
+qp42hvj+PzBTf3rIno3KEpRhMmnAtswozbV3P4l/VSZdfY+pwWsx7/5+Cf1R9nAP
+vp6YCjGcLcbASazYNOWf0FRInt3pxdgT9DWjJDi99FGKA+UbI2yxHwzE+cE8r9Od
+Iy42uhzCjJBqdg+an+q63k6yrOwv18KP69LlU/4vknhw4g3WxF4yTwVmXU8WKmux
+aOrJv2ED8pfA7k+zwv0rPyN+F2nOySxoChaFfeu6ntBCX7zK/nV0DsMQImOycfzO
+yN8WB9lRZTJVzU2r6PaGAI359uLHEmURy0069g+yZQKCAQAbECwJ99UFh0xKe1eu
+G/lm+2H/twSVMOmTJCOdHp8uLar4tYRdQa+XLcMfr75SIcN09lw6bgHqNLXW4Wcg
+LmXh97DMPsMyM0vkSEeQ4A7agldJkw6pHEDm5nRxM4alW44mrGPRWv5ZvWU2X7Gi
+6eeXMZGmHVKQJJzqrYc5pXZUpfqU9fET2HWB4JCeJvRUyUd0MvUE+CA5CePraMn4
+Hy4BcNQ+jP1p84+sMpfo00ZFduuS39pJ00LciCxMgtElBt4PmzDiOcpTQ5vBESJ6
+79o15eRA7lUKwNzIyGsJBXXaNPrskks2BU8ilNElV9RMWNfxcK+dGEBwWIXIGU4s
+x145AoIBAQCst9R8udNaaDLaTGNe126DuA8B/kwVdrLwSBqsZTXgeO+5J4dklEZl
+bU0d7hxTxoXRjySZEh+OtTSG9y/0oonxO0tYOXfU9jOrNxaueQKLk2EvgfFdoUEu
+r2/Y+xpsJQO3TBFfkDEn856Cuu0MMAG214/gxpY8XxowRI11NCRtN4S6gbTCbjp1
+TaCW8lXEMDW+Rfki0ugLyLVgD74CxWW1DuLEfbKKF3TnV0GtbXbbE1pU1dm+G5C8
+dL3FissYp5MPI5fRebcqzcBNjR1F15pGLpqVVy/IhmSmHVZmpISLJicxITScRiSo
+wgJY5R/XBAcVLgvmi9Dn/AY2jCfHa7flAoIBAQCbnZ6ivZg81g6/X9qdo9J61hX0
+Y7Fn7bLvcs1L0ARGTsfXMvegA806XyZThqjpY47nHpQtoz4z62kiTTsdpAZUeA3z
+9HUWr0b3YEpsvZpgyMNHgwq1vRDPjw4AWz0pBoDWMxx8Ck5nP1A//c1zyu9pgYEU
+R+OutDeCJ+0VAc6JSH9WMA08utGPGs3t02Zhtyt2sszE9vzz4hTi5340/AYG72p7
+YGlikUxvbyylYh9wR4YUYa/klikvKLHEML1P0BCr8Vex+wLSGS1h1F5tW1Xr2CZQ
+dVxFmfGmPDmwWbCQR6Rvt6FHRwNMpMrLr011h2RBcHBpdQl7XpUENDoopIh0
+-----END RSA PRIVATE KEY-----
diff --git a/conf/cert/openssl-test2.conf b/conf/cert/openssl-test2.conf
new file mode 100644
index 0000000000000..1e5beec911dff
--- /dev/null
+++ b/conf/cert/openssl-test2.conf
@@ -0,0 +1,40 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+[req]
+distinguished_name = req_distinguished_name
+x509_extensions = v3_req
+prompt = no
+
+[req_distinguished_name]
+C = CN
+ST = GuangDong
+L = ZhuHai
+O = iresty
+CN = test2.com
+
+[v3_req]
+subjectKeyIdentifier = hash
+authorityKeyIdentifier = keyid,issuer
+basicConstraints = CA:TRUE
+subjectAltName = @alt_names
+
+[alt_names]
+DNS.1 = test2.com
+DNS.2 = *.test2.com
+
+## openssl genrsa -out test2.key 3072
+## openssl req -new -x509 -key test2.key -sha256 -config openssl-test2.conf -out test2.crt -days 36500
diff --git a/conf/cert/test2.crt b/conf/cert/test2.crt
new file mode 100644
index 0000000000000..922a8f8b6896f
--- /dev/null
+++ b/conf/cert/test2.crt
@@ -0,0 +1,28 @@
+-----BEGIN CERTIFICATE-----
+MIIEsTCCAxmgAwIBAgIUMbgUUCYHkuKDaPy0bzZowlK0JG4wDQYJKoZIhvcNAQEL
+BQAwVzELMAkGA1UEBhMCQ04xEjAQBgNVBAgMCUd1YW5nRG9uZzEPMA0GA1UEBwwG
+Wmh1SGFpMQ8wDQYDVQQKDAZpcmVzdHkxEjAQBgNVBAMMCXRlc3QyLmNvbTAgFw0y
+MDA0MDQyMjE3NTJaGA8yMTIwMDMxMTIyMTc1MlowVzELMAkGA1UEBhMCQ04xEjAQ
+BgNVBAgMCUd1YW5nRG9uZzEPMA0GA1UEBwwGWmh1SGFpMQ8wDQYDVQQKDAZpcmVz
+dHkxEjAQBgNVBAMMCXRlc3QyLmNvbTCCAaIwDQYJKoZIhvcNAQEBBQADggGPADCC
+AYoCggGBAMQGBk35V3zaNVDWzEzVGd+EkZnUOrRpXQg5mmcnoKnrQ5rQQMsQCbMO
+gFvLt/9OEZQmbE2HuEKsPzL79Yjdu8rGjSoQdbJZ9ccO32uvln1gn68iK79o7Tvm
+TCi+BayyNA+lo9IxrBm1wGBkOU1ZPasGYzgBAbMLTSDps1EYxNR8t4l9PrTTRsh6
+NZyTYoDeVIsKZ9SckpjWVnxHOkF+AzZzIJJSe2pj572TDLYA/Xw9I4X3L+SHzwTl
+iGWNXb2tU367LHERHvensQzdle7mQN2kE5GpB7QPWB+t9V4mn30jc/LyDvOaei6L
++pbl5CriGBTjaR80oXhK765K720BQeKUezri15bQlMaUGQRnzr53ZsqA4PEh6WCX
+hUT2ibO32+uZFXzVQw8y/JUkPf76pZagi8DoLV+sfSbUtnpbQ8wyV2qqTM2eCuPi
+RgUwXQi2WssKKzrqcgKil3vksHZozLtOmyZiNE4qfNxv+UGoIybJtZmB+9spY0Rw
+5zBRuULycQIDAQABo3MwcTAdBgNVHQ4EFgQUCmZefzpizPrb3VbiIDhrA48ypB8w
+HwYDVR0jBBgwFoAUCmZefzpizPrb3VbiIDhrA48ypB8wDAYDVR0TBAUwAwEB/zAh
+BgNVHREEGjAYggl0ZXN0Mi5jb22CCyoudGVzdDIuY29tMA0GCSqGSIb3DQEBCwUA
+A4IBgQA0nRTv1zm1ACugJFfYZfxZ0mLJfRUCFMmFfhy+vGiIu6QtnOFVw/tEOyMa
+m78lBiqac15n3YWYiHiC5NFffTZ7XVlOjN2i4x2z2IJsHNa8tU80AX0Q/pizGK/d
++dzlcsGBb9MGT18h/B3/EYQFKLjUsr0zvDb1T0YDlRUsN3Bq6CvZmvfe9F7Yh4Z/
+XO5R+rX8w9c9A2jzM5isBw2qp/Ggn5RQodMwApEYkJdu80MuxaY6s3dssS4Ay8wP
+VNFEeLcdauJ00ES1OnbnuNiYSiSMOgWBsnR+c8AaSRB/OZLYQQKGGYbq0tspwRjM
+MGJRrI/jdKnvJQ8p02abdvA9ZuFChoD3Wg03qQ6bna68ZKPd9peBPpMrDDGDLkGI
+NzZ6bLJKILnQkV6b1OHVnPDsKXfXjUTTNK/QLJejTXu9RpMBakYZMzs/SOSDtFlS
+A+q25t6+46nvA8msUSBKyOGBX42mJcKvR4OgG44PfDjYfmjn2l+Dz/jNXDclpb+Q
+XAzBnfM=
+-----END CERTIFICATE-----
diff --git a/conf/cert/test2.key b/conf/cert/test2.key
new file mode 100644
index 0000000000000..c25d4e5bde9e4
--- /dev/null
+++ b/conf/cert/test2.key
@@ -0,0 +1,39 @@
+-----BEGIN RSA PRIVATE KEY-----
+MIIG5QIBAAKCAYEAxAYGTflXfNo1UNbMTNUZ34SRmdQ6tGldCDmaZyegqetDmtBA
+yxAJsw6AW8u3/04RlCZsTYe4Qqw/Mvv1iN27ysaNKhB1sln1xw7fa6+WfWCfryIr
+v2jtO+ZMKL4FrLI0D6Wj0jGsGbXAYGQ5TVk9qwZjOAEBswtNIOmzURjE1Hy3iX0+
+tNNGyHo1nJNigN5Uiwpn1JySmNZWfEc6QX4DNnMgklJ7amPnvZMMtgD9fD0jhfcv
+5IfPBOWIZY1dva1TfrsscREe96exDN2V7uZA3aQTkakHtA9YH631XiaffSNz8vIO
+85p6Lov6luXkKuIYFONpHzSheErvrkrvbQFB4pR7OuLXltCUxpQZBGfOvndmyoDg
+8SHpYJeFRPaJs7fb65kVfNVDDzL8lSQ9/vqllqCLwOgtX6x9JtS2eltDzDJXaqpM
+zZ4K4+JGBTBdCLZayworOupyAqKXe+SwdmjMu06bJmI0Tip83G/5QagjJsm1mYH7
+2yljRHDnMFG5QvJxAgMBAAECggGBAIELlkruwvGmlULKpWRPReEn3NJwLNVoJ56q
+jUMri1FRWAgq4PzNahU+jrHfwxmHw3rMcK/5kQwTaOefh1y63E35uCThARqQroSE
+/gBeb6vKWFVrIXG5GbQ9QBXyQroV9r/2Q4q0uJ+UTzklwbNx9G8KnXbY8s1zuyrX
+rvzMWYepMwqIMSfJjuebzH9vZ4F+3BlMmF4XVUrYj8bw/SDwXB0UXXT2Z9j6PC1J
+CS0oKbgIZ8JhoF3KKjcHBGwWTIf5+byRxeG+z99PBEBafm1Puw1vLfOjD3DN/fso
+8xCEtD9pBPBJ+W97x/U+10oKetmP1VVEr2Ph8+s2VH1zsRF5jo5d0GtvJqOwIQJ7
+z3OHJ7lLODw0KAjB1NRXW4dTTUDm6EUuUMWFkGAV6YTyhNLAT0DyrUFJck9RiY48
+3QN8vSf3n/+3wwg1gzcJ9w3W4DUbvGqu86CaUQ4UegfYJlusY/3YGp5bGNQdxmws
+lgIoSRrHp6UJKsP8Yl08MIvT/oNLgQKBwQD75SuDeyE0ukhEp0t6v+22d18hfSef
+q3lLWMI1SQR9Kiem9Z1KdRkIVY8ZAHANm6D8wgjOODT4QZtiqJd2BJn3Xf+aLfCd
+CW0hPvmGTcp/E4sDZ2u0HbIrUStz7ZcgXpjD2JJAJGEKY2Z7J65gnTqbqoBDrw1q
+1+FqtikkHRte1UqxjwnWBpSdoRQFgNPHxPWffhML1xsD9Pk1B1b7JoakYcKsNoQM
+oXUKPLxSZEtd0hIydqmhGYTa9QWBPNDlA5UCgcEAxzfGbOrPBAOOYZd3jORXQI6p
+H7SddTHMQyG04i+OWUd0HZFkK7/k6r26GFmImNIsQMB26H+5XoKRFKn+sUl14xHY
+FwB140j0XSav2XzT38UpJ9CptbgK1eKGQVp41xwRYjHVScE5hJuA3a1TKM0l26rp
+hny/KaP+tXuqt9QbxcUN6efubNYyFP+m6nq2/XdX74bJuGpXLq8W0oFdiocO6tmF
+4/Hsc4dCVrcwULqXQa0lJ57zZpfIPARqWM2847xtAoHBANVUNbDpg6rTJMc34722
+dAy3NhL3mqooH9aG+hsEls+l9uT4WFipqSScyU8ERuHPbt0BO1Hi2kFx1rYMUBG8
+PeT4b7NUutVUGV8xpUNv+FH87Bta6CUnjTAQUzuf+QCJ/NjIPrwh0yloG2+roIvk
+PLF/CZfI1hUpdZfZZChYmkiLXPHZURw4gH6q33j1rOYf0WFc9aZua0vDmZame6zB
+6P+oZ6VPmi/UQXoFC/y/QfDYK18fjfOI2DJTlnDoX4XErQKBwGc3M5xMz/MRcJyJ
+oIwj5jzxbRibOJV2tpD1jsU9xG/nQHbtVEwCgTVKFXf2M3qSMhFeZn0xZ7ZayZY+
+OVJbcDO0lBPezjVzIAB/Qc7aCOBAQ4F4b+VRtHN6iPqlSESTK0KH9Szgas+UzeCM
+o7BZEctNMu7WBSkq6ZXXu+zAfZ8q6HmPDA3hsFMG3dFQwSxzv+C/IhZlKkRqvNVV
+50QVk5oEF4WxW0PECY/qG6NH+YQylDSB+zPlYf4Of5cBCWOoxQKBwQCeo37JpEAR
+kYtqSjXkC5GpPTz8KR9lCY4SDuC1XoSVCP0Tk23GX6GGyEf4JWE+fb/gPEFx4Riu
+7pvxRwq+F3LaAa/FFTNUpY1+8UuiMO7J0B1RkVXkyJjFUF/aQxAnOoZPmzrdZhWy
+bpe2Ka+JS/aXSd1WRN1nmo/DarpWFvdLWZFwUt6zMziH40o1gyPHEuXOqVtf2QCe
+Q6WC9xnEz4lbb/fR2TF9QRA4FtoRpDe/f3ZGIpWE0RdwyZZ6uA7T1+Q=
+-----END RSA PRIVATE KEY-----
diff --git a/conf/config.yaml b/conf/config.yaml
index 8f691d59253b2..eba48dfa992c8 100644
--- a/conf/config.yaml
+++ b/conf/config.yaml
@@ -54,6 +54,8 @@ apisix:
# - 127.0.0.0/24 # If we don't set any IP list, then any IP access is allowed by default.
# - "::/64"
# port_admin: 9180 # use a separate port
+  # https_admin: true # enable HTTPS when using a separate port for the Admin API.
+  # The Admin API will use conf/apisix_admin_api.crt and conf/apisix_admin_api.key as its certificate.
# Default token when use API to call for Admin API.
# *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
@@ -89,9 +91,12 @@ apisix:
enable: true
enable_http2: true
listen_port: 9443
- ssl_protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
- ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
-
+ ssl_protocols: "TLSv1.2 TLSv1.3"
+ ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
+    key_encrypt_salt: "edd1c9f0985e76a2" # If not set, the original SSL key is saved into etcd as-is.
+    # If set, it must be a string of exactly 16 characters; the SSL key is then encrypted with AES-128-CBC.
+    # !!! Do not change it after SSL objects have been saved: SSL keys that were already saved can no longer be decrypted if it changes !!!
+# discovery: eureka # service discovery center
nginx_config: # config for render the template to genarate nginx.conf
error_log: "logs/error.log"
error_log_level: "warn" # warn,error
@@ -116,7 +121,19 @@ etcd:
host: # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
- "http://etcd:2379" # multiple etcd address
prefix: "/apisix" # apisix configurations prefix
- timeout: 3 # 3 seconds
+  timeout: 30 # 30 seconds
+ # user: root # root username for etcd
+ # password: 5tHkHhYkjr6cQY # root password for etcd
+#eureka:
+# host: # it's possible to define multiple eureka hosts addresses of the same eureka cluster.
+# - "http://127.0.0.1:8761"
+# prefix: "/eureka/"
+# fetch_interval: 30 # default 30s
+# weight: 100 # default weight for node
+# timeout:
+# connect: 2000 # default 2000ms
+# send: 2000 # default 2000ms
+# read: 5000 # default 5000ms
plugins: # plugin list
- example-plugin
@@ -145,6 +162,14 @@ plugins: # plugin list
- proxy-mirror
- kafka-logger
- cors
+ - consumer-restriction
+ - syslog
- batch-requests
+ - http-logger
+ - skywalking
+ - echo
+ - authz-keycloak
+ - uri-blocker
+
stream_plugins:
- mqtt-proxy
diff --git a/dashboard b/dashboard
index cfb3ee7b87210..329b092dcaa7a 160000
--- a/dashboard
+++ b/dashboard
@@ -1 +1 @@
-Subproject commit cfb3ee7b8721076975c1deaff3e52da3ea4a312a
+Subproject commit 329b092dcaa7a505dcdec86c667b6803f5863d94
diff --git a/doc/README.md b/doc/README.md
index 238e1983cb1f1..c9a8f95b41f64 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -16,19 +16,19 @@
# limitations under the License.
#
-->
-[Chinese](README_CN.md)
+
+[Chinese](./zh-cn/README.md)
Reference Documentation
==================
-* [APISIX Readme](../README.md)
+* [APISIX Readme](./README.md)
* [Architecture Design](architecture-design.md)
* [Benchmark](benchmark.md)
* [Getting Started Guide](getting-started.md)
* [How to build Apache APISIX](how-to-build.md)
* [Health Check](health-check.md): Enable health check on the upstream node, and will automatically filter unhealthy nodes during load balancing to ensure system stability.
-* Router
- * [radixtree](router-radixtree.md)
+* [Router radixtree](router-radixtree.md)
* [Stand Alone Model](stand-alone.md): Supports to load route rules from local yaml file, it is more friendly such as under the kubernetes(k8s).
* [Stream Proxy](stream-proxy.md)
* [Admin API](admin-api.md)
@@ -51,7 +51,7 @@ Plugins
* [proxy-rewrite](plugins/proxy-rewrite.md): Rewrite upstream request information.
* [prometheus](plugins/prometheus.md): Expose metrics related to APISIX and proxied upstream services in Prometheus exposition format, which can be scraped by a Prometheus Server.
* [OpenTracing](plugins/zipkin.md): Supports Zikpin and Apache SkyWalking.
-* [grpc-transcode](plugins/grpc-transcoding.md): REST <--> gRPC transcoding.
+* [grpc-transcode](plugins/grpc-transcode.md): REST <--> gRPC transcoding.
* [serverless](plugins/serverless.md):Allows to dynamically run Lua code at *different* phase in APISIX.
* [ip-restriction](plugins/ip-restriction.md): IP whitelist/blacklist.
* [openid-connect](plugins/oauth.md)
@@ -65,11 +65,20 @@ Plugins
* [kafka-logger](plugins/kafka-logger.md): Log requests to External Kafka servers.
* [cors](plugins/cors.md): Enable CORS(Cross-origin resource sharing) for your API.
* [batch-requests](plugins/batch-requests.md): Allow you send mutiple http api via **http pipeline**.
+* [authz-keycloak](plugins/authz-keycloak.md): Authorization with Keycloak Identity Server.
+* [uri-blocker](plugins/uri-blocker.md): Block client request by URI.
+* [oauth](plugins/oauth.md): Provides OAuth 2 authentication and introspection.
-Deploy to the Cloud
+Deploy
=======
+
### AWS
The recommended approach is to deploy APISIX with [AWS CDK](https://aws.amazon.com/cdk/) on [AWS Fargate](https://aws.amazon.com/fargate/) which helps you decouple the APISIX layer and the upstream layer on top of a fully-managed and secure serverless container compute environment with autoscaling capabilities.
See [this guide](https://github.com/pahud/cdk-samples/blob/master/typescript/apisix/README.md) by [Pahud Hsieh](https://github.com/pahud) and learn how to provision the recommended architecture 100% in AWS CDK.
+
+### Kubernetes
+
+See [this guide](../kubernetes/README.md) and learn how to deploy APISIX in Kubernetes.
+
diff --git a/doc/README_CN.md b/doc/README_CN.md
deleted file mode 100644
index 1fc08c5abccd9..0000000000000
--- a/doc/README_CN.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-[English](README.md)
-
-Reference document
-==================
-
-* [APISIX 说明](../README_CN.md)
-* [架构设计](architecture-design-cn.md)
-* [压力测试](benchmark-cn.md)
-* [如何构建 Apache APISIX](how-to-build-cn.md)
-* [健康检查](health-check.md): 支持对上游节点的主动和被动健康检查,在负载均衡时自动过滤掉不健康的节点。
-* Router(路由)
- * [radixtree](router-radixtree.md)
- * [r3](router-r3.md)
-* [独立运行模型](stand-alone-cn.md): 支持从本地 yaml 格式的配置文件启动,更适合 Kubernetes(k8s) 体系。
-* [TCP/UDP 动态代理](stream-proxy-cn.md)
-* [管理 API](admin-api-cn.md)
-* [变更日志](../CHANGELOG_CN.md)
-* [代码风格](../CODE_STYLE.md)
-* [常见问答](../FAQ_CN.md)
-
-插件
-===
-
-* [插件热加载](plugins-cn.md):无需重启服务,完成插件热加载或卸载。
-* [HTTPS](https-cn.md):根据 TLS 扩展字段 SNI(Server Name Indication) 动态加载证书。
-* [动态负载均衡](architecture-design-cn.md#upstream):跨多个上游服务的动态负载均衡,目前已支持 round-robin 和一致性哈希算法。
-* [key-auth](plugins/key-auth-cn.md):基于 Key Authentication 的用户认证。
-* [JWT-auth](plugins/jwt-auth-cn.md):基于 [JWT](https://jwt.io/) (JSON Web Tokens) Authentication 的用户认证。
-* [basic-auth](plugins/basic-auth-cn.md):基于 basic auth 的用户认证。
-* [wolf-rbac](plugins/wolf-rbac-cn.md) 基于 *RBAC* 的用户认证及授权。
-* [limit-count](plugins/limit-count-cn.md):基于“固定窗口”的限速实现。
-* [limit-req](plugins/limit-req-cn.md):基于漏桶原理的请求限速实现。
-* [limit-conn](plugins/limit-conn-cn.md):限制并发请求(或并发连接)。
-* [proxy-rewrite](plugins/proxy-rewrite-cn.md): 支持自定义修改 proxy 到上游的信息。
-* [prometheus](plugins/prometheus-cn.md):以 Prometheus 格式导出 APISIX 自身的状态信息,方便被外部 Prometheus 服务抓取。
-* [OpenTracing](plugins/zipkin-cn.md):支持 Zikpin 和 Apache SkyWalking。
-* [grpc-transcode](plugins/grpc-transcoding-cn.md):REST <--> gRPC 转码。
-* [serverless](plugins/serverless-cn.md):允许在 APISIX 中的不同阶段动态运行 Lua 代码。
-* [ip-restriction](plugins/ip-restriction-cn.md): IP 黑白名单。
-* [openid-connect](plugins/oauth.md)
-* [redirect](plugins/redirect-cn.md): URI 重定向。
-* [response-rewrite](plugins/response-rewrite-cn.md): 支持自定义修改返回内容的 `status code`、`body`、`headers`。
-* [fault-injection](plugins/fault-injection-cn.md):故障注入,可以返回指定的响应体、响应码和响应时间,从而提供了不同的失败场景下处理的能力,例如服务失败、服务过载、服务高延时等。
-* [proxy-cache](plugins/proxy-cache-cn.md):代理缓存插件提供缓存后端响应数据的能力。
-* [proxy-mirror](plugins/proxy-mirror-cn.md):代理镜像插件提供镜像客户端请求的能力。
-* [udp-logger](plugins/udp-logger.md): 将请求记录到UDP服务器
-* [tcp-logger](plugins/tcp-logger.md): 将请求记录到TCP服务器
-* [kafka-logger](plugins/kafka-logger-cn.md): 将请求记录到外部Kafka服务器。
-* [cors](plugins/cors-cn.md): 为你的API启用CORS.
-* [batch-requests](plugins/batch-requests-cn.md): 以 **http pipeline** 的方式在网关一次性发起多个 `http` 请求。
diff --git a/doc/_navbar.md b/doc/_navbar.md
new file mode 100644
index 0000000000000..1612e7d8a96ec
--- /dev/null
+++ b/doc/_navbar.md
@@ -0,0 +1,22 @@
+
+
+- Translations
+ - [:uk: English](/)
+ - [:cn: 中文](/zh-cn/)
diff --git a/doc/_sidebar.md b/doc/_sidebar.md
new file mode 100644
index 0000000000000..6a9c1e5511679
--- /dev/null
+++ b/doc/_sidebar.md
@@ -0,0 +1,103 @@
+
+
+- Getting started
+
+ - [Introduction](README.md)
+ - [Quick start](getting-started.md)
+
+- General
+
+ - [Architecture](architecture-design.md)
+
+ - [Benchmark](benchmark.md)
+
+ - Installation
+
+ - [How to build](how-to-build.md)
+ - [Install Dependencies](install-dependencies.md)
+
+ - [HTTPS](https.md)
+
+ - [Router](router-radixtree.md)
+
+ - Plugins
+
+ - [Develop Plugins](plugin-develop.md)
+ - [Hot Reload](plugins.md)
+
+ - Proxy Modes
+
+ - [GRPC Proxy](grpc-proxy.md)
+ - [Stream Proxy](stream-proxy.md)
+
+- Plugins
+
+ - Authentication
+
+ - [Key Auth](plugins/key-auth.md)
+ - [Basic Auth](plugins/basic-auth.md)
+ - [JWT Auth](plugins/jwt-auth.md)
+    - [OpenID Connect](plugins/oauth.md)
+
+ - General
+
+ - [Redirect](plugins/redirect.md)
+ - [Serverless](plugins/serverless.md)
+ - [Batch Request](plugins/batch-requests.md)
+ - [Fault Injection](plugins/fault-injection.md)
+ - [MQTT Proxy](plugins/mqtt-proxy.md)
+ - [Proxy Cache](plugins/proxy-cache.md)
+ - [Proxy Mirror](plugins/proxy-mirror.md)
+ - [Echo](plugins/echo.md)
+
+ - Transformations
+
+ - [Response Rewrite](plugins/response-rewrite.md)
+ - [Proxy Rewrite](plugins/proxy-rewrite.md)
+ - [GRPC Transcoding](plugins/grpc-transcode.md)
+
+ - Security
+
+ - [Consumer Restriction](plugins/consumer-restriction.md)
+ - [Limit Connection](plugins/limit-conn.md)
+ - [Limit Count](plugins/limit-count.md)
+ - [Limit Request](plugins/limit-req.md)
+ - [CORS](plugins/cors.md)
+ - [IP Restriction](plugins/ip-restriction.md)
+ - [Keycloak Authorization](plugins/authz-keycloak.md)
+ - [RBAC Wolf](plugins/wolf-rbac.md)
+
+ - Monitoring
+
+ - [Prometheus](plugins/prometheus.md)
+    - [SkyWalking](plugins/skywalking.md)
+ - [Zipkin](plugins/zipkin.md)
+
+ - Loggers
+
+ - [HTTP Logger](plugins/http-logger.md)
+ - [Kafka Logger](plugins/kafka-logger.md)
+ - [Syslog](plugins/syslog.md)
+ - [TCP Logger](plugins/tcp-logger.md)
+ - [UDP Logger](plugins/udp-logger.md)
+
+- Admin API
+
+ - [Admin API](admin-api.md)
diff --git a/doc/admin-api.md b/doc/admin-api.md
index f60ccb6aef0ea..b1112e9ab2385 100644
--- a/doc/admin-api.md
+++ b/doc/admin-api.md
@@ -19,8 +19,6 @@
# Table of Contents
-===
-
* [Route](#route)
* [Service](#service)
* [Consumer](#consumer)
@@ -41,7 +39,7 @@
|PUT |/apisix/admin/routes/{id}|{...}|Create resource by ID|
|POST |/apisix/admin/routes |{...}|Create resource, and ID is generated by server|
|DELETE |/apisix/admin/routes/{id}|NULL|Remove resource|
-|PATCH |/apisix/admin/routes/{id}/{path}|{...}|Update targeted content|
+|PATCH |/apisix/admin/routes/{id}|{...}|Update targeted content; if you want to remove an attribute, set its value to null|
> URI Request Parameters:
@@ -53,7 +51,8 @@
|Parameter |Required |Type |Description |Example|
|---------|---------|----|-----------|----|
-|desc |False |Auxiliary |Identifies route names, usage scenarios, and more.|customer xxxx|
+|name |False |Auxiliary |Identifies the route name.|customer-xxxx|
+|desc |False |Auxiliary |Route description, usage scenarios, and more.|customer xxxx|
|uri |True |Match Rules|In addition to full matching such as `/foo/bar`、`/foo/gloo`, using different [Router](architecture-design.md#router) allows more advanced matching, see [Router](architecture-design.md#router) for more.|"/hello"|
|host |False |Match Rules|Currently requesting a domain name, such as `foo.com`; pan-domain names such as `*.foo.com` are also supported.|"foo.com"|
|hosts |False |Match Rules|The `host` in the form of a list means that multiple different hosts are allowed, and match any one of them.|{"foo.com", "*.bar.com"}|
@@ -61,7 +60,7 @@
|remote_addrs|False |Match Rules|The `remote_addr` in the form of a list indicates that multiple different IP addresses are allowed, and match any one of them.|{"127.0.0.1", "192.0.0.0/8", "::1"}|
|methods |False |Match Rules|If empty or without this option, there are no `method` restrictions, and it can be a combination of one or more: `GET`,`POST`,`PUT`,`DELETE`,`PATCH`, `HEAD`,`OPTIONS`,`CONNECT`,`TRACE`.|{"GET", "POST"}|
|priority |False |Match Rules|If different routes contain the same `uri`, determine which route is matched first based on the attribute` priority`. Larger value means higher priority. The default value is 0.|priority = 10|
-|vars |False |Match Rules |A list of one or more `{var, operator, val}` elements, like this: `{{var, operator, val}, {var, operator, val}, ...}`. For example: `{"arg_name", "==", "json"}` means that the current request parameter `name` is `json`. The `var` here is consistent with the internal variable name of Nginx, so you can also use `request_uri`, `host`, etc. For the operator part, the currently supported operators are `==`, `~=`,`>`, `<`, and `~~`. For the `>` and `<` operators, the result is first converted to `number` and then compared. See a list of [supported operators](#available-operators) |{{"arg_name", "==", "json"}, {"arg_age", ">", 18}}|
+|vars |False |Match Rules |A list of one or more `{var, operator, val}` elements, like this: `{{var, operator, val}, {var, operator, val}, ...}`. For example: `{"arg_name", "==", "json"}` means that the current request parameter `name` is `json`. The `var` here is consistent with the internal variable name of Nginx, so you can also use `request_uri`, `host`, etc. For the operator part, the currently supported operators are `==`, `~=`,`>`, `<`, and `~~`. For the `>` and `<` operators, the result is first converted to `number` and then compared. See a list of [supported operators](#available-operators) |{{"arg_name", "==", "json"}, {"arg_age", ">", 18}}|
|filter_func|False|Match Rules|User-defined filtering function. You can use it to achieve matching requirements for special scenarios. This function accepts an input parameter named `vars` by default, which you can use to get Nginx variables.|function(vars) return vars["arg_name"] == "json" end|
|plugins |False |Plugin|See [Plugin](architecture-design.md#plugin) for more ||
|upstream |False |Upstream|Enabled Upstream configuration, see [Upstream](architecture-design.md#upstream) for more||
@@ -83,6 +82,7 @@ Config Example:
"hosts": ["a.com","b.com"], # A set of host. Host and hosts only need to be non-empty one.
"plugins": {}, # Bound plugin
"priority": 0, # If different routes contain the same `uri`, determine which route is matched first based on the attribute` priority`, the default value is 0.
+ "name": "route-xxx",
"desc": "hello world",
"remote_addr": "127.0.0.1", # Client IP
"remote_addrs": ["127.0.0.1"], # A set of Client IP. Remote_addr and remo-te_addrs only need to be non-empty one.
@@ -183,7 +183,7 @@ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f13
|PUT |/apisix/admin/services/{id}|{...}|Create resource by ID|
|POST |/apisix/admin/services |{...}|Create resource, and ID is generated by server|
|DELETE |/apisix/admin/services/{id}|NULL|Remove resource|
-|PATCH |/apisix/admin/routes/{id}/{path}|{...}|Update targeted content|
+|PATCH |/apisix/admin/services/{id}|{...}|Update targeted content; if you want to remove an attribute, set its value to null|
> Request Body Parameters:
@@ -192,7 +192,8 @@ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f13
|plugins |False |Plugin|See [Plugin](architecture-design.md#plugin) for more ||
|upstream |False |Upstream|Enabled Upstream configuration, see [Upstream](architecture-design.md#upstream) for more||
|upstream_id|False |Upstream|Enabled upstream id, see [Upstream](architecture-design.md#upstream) for more ||
-|desc |False |Auxiliary |Identifies route names, usage scenarios, and more.|customer xxxx|
+|name |False |Auxiliary |Identifies the service name.|customer-xxxx|
+|desc |False |Auxiliary |Service description, usage scenarios, and more.|customer xxxx|
Config Example:
@@ -202,6 +203,7 @@ Config Example:
"plugins": {}, # Bound plugin
"upstream_id": "1", # upstream id, recommended
"upstream": {}, # upstream, not recommended
+ "name": "service-test",
"desc": "hello world",
}
```
@@ -209,7 +211,7 @@ Config Example:
Example:
```shell
-$ curl http://127.0.0.1:9080/apisix/admin/services/201 -X PUT -i -d '
+$ curl http://127.0.0.1:9080/apisix/admin/services/201 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
{
"plugins": {
"limit-count": {
@@ -286,7 +288,7 @@ The binding authentication and authorization plug-in is a bit special. When it n
Example:
```shell
-$ curl http://127.0.0.1:9080/apisix/admin/consumers/2 -X PUT -i -d '
+$ curl http://127.0.0.1:9080/apisix/admin/consumers/2 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
{
"username": "jack",
"plugins": {
@@ -328,7 +330,7 @@ Return response from etcd currently.
|PUT |/apisix/admin/upstreams/{id}|{...}|Create resource by ID|
|POST |/apisix/admin/upstreams |{...}|Create resource, and ID is generated by server|
|DELETE |/apisix/admin/upstreams/{id}|NULL|Remove resource|
-|PATCH |/apisix/admin/upstreams/{id}/{path}|{...}|Update targeted content|
+|PATCH |/apisix/admin/upstreams/{id}|{...}|Update targeted content; if you want to remove an attribute, set its value to null|
> Request Body Parameters:
@@ -345,7 +347,8 @@ In addition to the basic complex equalization algorithm selection, APISIX's Upst
|retries |optional|Pass the request to the next upstream using the underlying Nginx retry mechanism, the retry mechanism is enabled by default and set the number of retries according to the number of backend nodes. If `retries` option is explicitly set, it will override the default value.|
|enable_websocket|optional| enable `websocket`(boolean), default `false`.|
|timeout|optional| Set the timeout for connection, sending and receiving messages. |
-|desc |optional|Identifies route names, usage scenarios, and more.|
+|name |optional|Identifies the upstream name.|
+|desc |optional|Upstream description, usage scenarios, and more.|
Config Example:
@@ -371,6 +374,7 @@ Config Example:
"checks": {}, # Health check parameters
"hash_on": "",
"key": "",
+ "name": "upstream-for-test",
"desc": "hello world",
}
```
@@ -378,15 +382,15 @@ Config Example:
Example:
```shell
-$ curl http://127.0.0.1:9080/apisix/admin/upstreams/100 -i -X PUT -d '
-> {
-> "type": "roundrobin",
-> "nodes": {
-> "127.0.0.1:80": 1,
-> "127.0.0.2:80": 2,
-> "foo.com:80": 3
-> }
-> }'
+$ curl http://127.0.0.1:9080/apisix/admin/upstreams/100 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -i -X PUT -d '
+{
+ "type":"roundrobin",
+ "nodes":{
+ "127.0.0.1:80":1,
+ "127.0.0.2:80":2,
+ "foo.com:80":3
+ }
+}'
HTTP/1.1 201 Created
Date: Thu, 26 Dec 2019 04:19:34 GMT
Content-Type: text/plain
diff --git a/doc/architecture-design.md b/doc/architecture-design.md
index 9edaadb378b45..098808e1616d8 100644
--- a/doc/architecture-design.md
+++ b/doc/architecture-design.md
@@ -17,7 +17,7 @@
#
-->
-[Chinese](architecture-design-cn.md)
+[Chinese](zh-cn/architecture-design.md)
## Table of Contents
@@ -106,7 +106,7 @@ Server: APISIX web server
When we receive a successful response, it indicates that the route was successfully created.
-For specific options of Route, please refer to [Admin API](admin-api-cn.md#route).
+For specific options of Route, please refer to [Admin API](admin-api.md#route).
[Back to top](#Table-of-contents)
@@ -233,8 +233,9 @@ In addition to the basic complex equalization algorithm selection, APISIX's Upst
|Name |Optional|Description|
|------- |-----|------|
|type |required|`roundrobin` supports the weight of the load, `chash` consistency hash, pick one of them.|
-|nodes |required if `k8s_deployment_info` not configured|Hash table, the key of the internal element is the upstream machine address list, the format is `Address + Port`, where the address part can be IP or domain name, such as `192.168.1.100:80`, `foo.com:80`, etc. Value is the weight of the node. In particular, when the weight value is `0`, it has a special meaning, which usually means that the upstream node is invalid and never wants to be selected.|
-|k8s_deployment_info |required if `nodes` not configured|fields: `namespace`、`deploy_name`、`service_name`、`port`、`backend_type`, `port` is number, `backend_type` is `pod` or `service`, others is string. |
+|nodes |required if `service_name` and `k8s_deployment_info` not configured|Hash table, the key of the internal element is the upstream machine address list, the format is `Address + Port`, where the address part can be IP or domain name, such as `192.168.1.100:80`, `foo.com:80`, etc. Value is the weight of the node. In particular, when the weight value is `0`, it has a special meaning, which usually means that the upstream node is invalid and never wants to be selected.|
+|service_name |required if `nodes` and `k8s_deployment_info` not configured |The name of the upstream service, used together with the service registry; refer to [Integration service discovery registry](discovery.md).|
+|k8s_deployment_info |required if `nodes` and `service_name` not configured|fields: `namespace`, `deploy_name`, `service_name`, `port`, `backend_type`; `port` is a number, `backend_type` is `pod` or `service`, the others are strings. |
|hash_on |optional|This option is only valid if the `type` is `chash`. Supported types `vars`(Nginx variables), `header`(custom header), `cookie`, `consumer`, the default value is `vars`.|
|key |required|This option is only valid if the `type` is `chash`. Find the corresponding node `id` according to `hash_on` and `key`. When `hash_on` is set as `vars`, `key` is the required parameter, for now, it support nginx built-in variables like `uri, server_name, server_addr, request_uri, remote_port, remote_addr, query_string, host, hostname, arg_***`, `arg_***` is arguments in the request line, [Nginx variables list](http://nginx.org/en/docs/varindex.html). When `hash_on` is set as `header`, `key` is the required parameter, and `header name` is customized. When `hash_on` is set to `cookie`, `key` is the required parameter, and `cookie name` is customized. When `hash_on` is set to `consumer`, `key` does not need to be set. In this case, the `key` adopted by the hash algorithm is the `consumer_id` authenticated. If the specified `hash_on` and `key` can not fetch values, it will be fetch `remote_addr` by default.|
|checks |optional|Configure the parameters of the health check. For details, refer to [health-check](health-check.md).|
@@ -350,7 +351,7 @@ Here are some examples of configurations using different `hash_on` types:
Create a consumer object:
```shell
-curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d `
+curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
"username": "jack",
"plugins": {
@@ -358,7 +359,7 @@ curl http://127.0.0.1:9080/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f1
"key": "auth-jack"
}
}
-}`
+}'
```
Create route object and enable `key-auth` plugin authentication:
@@ -536,6 +537,35 @@ HTTP/1.1 503 Service Temporarily Unavailable
```
+Use the [consumer-restriction](plugins/consumer-restriction.md) plugin to restrict Jack's access to this API.
+
+```shell
+# Add Jack to the blacklist
+$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
+{
+ "plugins": {
+ "key-auth": {},
+ "consumer-restriction": {
+ "blacklist": [
+ "jack"
+ ]
+ }
+ },
+ "upstream": {
+ "nodes": {
+ "127.0.0.1:1980": 1
+ },
+ "type": "roundrobin"
+ },
+ "uri": "/hello"
+}'
+
+# Repeated tests, all return 403; Jack is forbidden to access this API
+$ curl http://127.0.0.1:9080/hello -H 'apikey: auth-one' -I
+HTTP/1.1 403
+...
+
+```
+
[Back to top](#Table-of-contents)
## Global Rule
@@ -547,6 +577,7 @@ We can register a global [Plugin](#Plugin) with `GlobalRule`:
curl -X PUT \
https://{apisix_listen_address}/apisix/admin/global_rules/1 \
-H 'Content-Type: application/json' \
+ -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
-d '{
"plugins": {
"limit-count": {
diff --git a/doc/benchmark.md b/doc/benchmark.md
index eaf4e4cc6cdac..9c3c016d0fcff 100644
--- a/doc/benchmark.md
+++ b/doc/benchmark.md
@@ -17,7 +17,7 @@
#
-->
-[Chinese](benchmark-cn.md)
+[Chinese](zh-cn/benchmark.md)
### Benchmark Environments
@@ -35,13 +35,13 @@ and the response size was 1KB.
The x-axis means the size of CPU core, and the y-axis is QPS.
-
+
#### Latency
Note the y-axis latency in **microsecond(μs)** not millisecond.
-
+
#### Flame Graph
@@ -80,18 +80,18 @@ and the response size was 1KB.
The x-axis means the size of CPU core, and the y-axis is QPS.
-
+
#### Latency
Note the y-axis latency in **microsecond(μs)** not millisecond.
-
+
#### Flame Graph
The result of Flame Graph:
-![flamegraph-2](../doc/images/flamegraph-2.jpg)
+![flamegraph-2](./images/flamegraph-2.jpg)
And if you want to run the benchmark test in your machine, you should run another Nginx to listen 80 port.
diff --git a/doc/discovery.md b/doc/discovery.md
new file mode 100644
index 0000000000000..96d36e8e3663c
--- /dev/null
+++ b/doc/discovery.md
@@ -0,0 +1,244 @@
+
+[Chinese](zh-cn/discovery.md)
+
+# Integration service discovery registry
+
+* [**Summary**](#summary)
+* [**How to extend the discovery client?**](#how-extend-the-discovery-client)
+  * [**Basic steps**](#basic-steps)
+  * [**The example of Eureka**](#the-example-of-eureka)
+    * [**Implementation of eureka.lua**](#implementation-of-eurekalua)
+    * [**How to convert Eureka's instance data to APISIX's node?**](#how-convert-eurekas-instance-data-to-apisixs-node)
+* [**Configuration for discovery client**](#configuration-for-discovery-client)
+  * [**Select discovery client**](#select-discovery-client)
+  * [**Configuration for Eureka**](#configuration-for-eureka)
+* [**Upstream setting**](#upstream-setting)
+
+## Summary
+
+As system traffic changes, the number of servers backing an upstream service grows or shrinks, and servers must be replaced when their hardware fails. If the gateway maintains upstream service information through static configuration, the maintenance cost under a microservices architecture becomes unpredictable. Moreover, stale information impacts the business, and the risk of human error cannot be ignored. It is therefore essential that the gateway automatically fetch the latest list of service instances from a service registry, as shown in the figure below:
+
+![](./images/discovery.png)
+
+1. When a service starts, it reports information such as its service name, IP and port to the registry. The service keeps communicating with the registry, for example via a heartbeat; if the registry cannot reach the service for a long time, the instance is deregistered, and when the service goes offline the registry deletes the instance information.
+2. The gateway gets service instance information from the registry in near-real time.
+3. When the user requests the service through the gateway, the gateway selects one instance from the registry for proxy.
+
+Common registries: Eureka, etcd, Consul, ZooKeeper, Nacos, etc.
+
+## How extend the discovery client?
+
+### Basic steps
+
+It is very easy to extend APISIX with a new discovery client; the basic steps are as follows:
+
+1. Add the implementation of the registry client in the `apisix/discovery/` directory;
+
+2. Implement the `_M.init_worker()` function for initialization and the `_M.nodes(service_name)` function for obtaining the list of service instance nodes;
+
+3. Convert the registry's data into the node format used by APISIX;
+
+
+### The example of Eureka
+
+#### Implementation of eureka.lua
+
+First, add [`eureka.lua`](../apisix/discovery/eureka.lua) in the `apisix/discovery/` directory;
+
+Then implement the `_M.init_worker()` function for initialization and the `_M.nodes(service_name)` function for obtaining the list of service instance nodes in `eureka.lua`:
+
+ ```lua
+ local _M = {
+ version = 1.0,
+ }
+
+
+ function _M.nodes(service_name)
+ ... ...
+ end
+
+
+ function _M.init_worker()
+ ... ...
+ end
+
+
+ return _M
+ ```
+
+#### How convert Eureka's instance data to APISIX's node?
+
+Here's an example of Eureka's data:
+
+```json
+{
+ "applications": {
+ "application": [
+ {
+ "name": "USER-SERVICE", # service name
+ "instance": [
+ {
+ "instanceId": "192.168.1.100:8761",
+ "hostName": "192.168.1.100",
+ "app": "USER-SERVICE", # service name
+ "ipAddr": "192.168.1.100", # IP address
+ "status": "UP",
+ "overriddenStatus": "UNKNOWN",
+ "port": {
+ "$": 8761,
+ "@enabled": "true"
+ },
+ "securePort": {
+ "$": 443,
+ "@enabled": "false"
+ },
+ "metadata": {
+ "management.port": "8761",
+ "weight": 100 # Setting by 'eureka.instance.metadata-map.weight' of the spring boot application
+ },
+ "homePageUrl": "http://192.168.1.100:8761/",
+ "statusPageUrl": "http://192.168.1.100:8761/actuator/info",
+ "healthCheckUrl": "http://192.168.1.100:8761/actuator/health",
+ ... ...
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+Processing Eureka's instance data requires the following steps:
+
+1. Select only the UP instances, i.e. those whose `overriddenStatus` value is "UP", or whose `overriddenStatus` is "UNKNOWN" while `status` is "UP".
+2. Host: the `ipAddr` field is the IP address of the instance, and must be IPv4 or IPv6.
+3. Port: if the value of `port["@enabled"]` equals "true", use the value of `port["\$"]`; if the value of `securePort["@enabled"]` equals "true", use the value of `securePort["\$"]`.
+4. Weight: `local weight = metadata.weight or local_conf.eureka.weight or 100`
+
+The result of this example is as follows:
+
+```json
+[
+ {
+ "host" : "192.168.1.100",
+ "port" : 8761,
+ "weight" : 100,
+ "metadata" : {
+ "management.port": "8761",
+ }
+ }
+]
+```
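+
+The selection steps above can be sketched in Lua (a simplified illustration only; the actual field handling in `eureka.lua` may differ):
+
+```lua
+-- Convert one Eureka instance (decoded from JSON) into an APISIX node.
+-- `default_weight` stands in for `local_conf.eureka.weight`.
+local function instance_to_node(instance, default_weight)
+    -- 1. keep only UP instances
+    local status = instance.overriddenStatus
+    if status == "UNKNOWN" then
+        status = instance.status
+    end
+    if status ~= "UP" then
+        return nil
+    end
+
+    -- 2/3. host and port: prefer the secure port when it is enabled
+    local port = instance.port["$"]
+    if instance.securePort["@enabled"] == "true" then
+        port = instance.securePort["$"]
+    end
+
+    -- 4. weight: instance metadata overrides the configured default
+    local metadata = instance.metadata or {}
+    return {
+        host = instance.ipAddr,
+        port = port,
+        weight = metadata.weight or default_weight or 100,
+        metadata = metadata,
+    }
+end
+```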
+
+## Configuration for discovery client
+
+### Select discovery client
+
+Add the following configuration to `conf/config.yaml` and select the discovery client type you want:
+
+```yaml
+apisix:
+ discovery: eureka
+```
+
+This name must be consistent with the file name of the registry implementation in the `apisix/discovery/` directory.
+
+Currently supported discovery client: Eureka.
+
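+For illustration, the configured name could be resolved to the module roughly like this (a hypothetical sketch; APISIX's actual loading code may differ):
+
+```lua
+-- `local_conf` stands for the parsed conf/config.yaml
+local discovery_type = local_conf.apisix.discovery    -- e.g. "eureka"
+-- loads apisix/discovery/eureka.lua
+local discovery = require("apisix.discovery." .. discovery_type)
+
+discovery.init_worker()                        -- initialize in each worker process
+local nodes = discovery.nodes("USER-SERVICE")  -- fetch the service instance nodes
+```
+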
+### Configuration for Eureka
+
+Add the following configuration to `conf/config.yaml`:
+
+```yaml
+eureka:
+  host: # it's possible to define multiple eureka host addresses of the same eureka cluster.
+    - "http://${username}:${password}@${eureka_host1}:${eureka_port1}"
+    - "http://${username}:${password}@${eureka_host2}:${eureka_port2}"
+ prefix: "/eureka/"
+ fetch_interval: 30 # 30s
+ weight: 100 # default weight for node
+ timeout:
+ connect: 2000 # 2000ms
+ send: 2000 # 2000ms
+ read: 5000 # 5000ms
+```
+
+
+## Upstream setting
+
+Here is an example of routing a request with a URI of "/user/*" to a service named "USER-SERVICE" in the registry:
+
+```shell
+$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
+{
+ "uri": "/user/*",
+ "upstream": {
+ "service_name": "USER-SERVICE",
+ "type": "roundrobin"
+ }
+}'
+
+HTTP/1.1 201 Created
+Date: Sat, 31 Aug 2019 01:17:15 GMT
+Content-Type: text/plain
+Transfer-Encoding: chunked
+Connection: keep-alive
+Server: APISIX web server
+
+{"node":{"value":{"uri":"\/user\/*","upstream": {"service_name": "USER-SERVICE", "type": "roundrobin"}},"createdIndex":61925,"key":"\/apisix\/routes\/1","modifiedIndex":61925},"action":"create"}
+```
+
+Because upstream interface URLs may conflict, a gateway usually distinguishes services by URI prefix:
+
+```shell
+$ curl http://127.0.0.1:9080/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
+{
+ "uri": "/a/*",
+ "plugins": {
+ "proxy-rewrite" : {
+ regex_uri: ["^/a/(.*)", "/${1}"]
+ }
+ }
+ "upstream": {
+ "service_name": "A-SERVICE",
+ "type": "roundrobin"
+ }
+}'
+
+$ curl http://127.0.0.1:9080/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
+{
+ "uri": "/b/*",
+ "plugins": {
+ "proxy-rewrite" : {
+ regex_uri: ["^/b/(.*)", "/${1}"]
+ }
+ }
+ "upstream": {
+ "service_name": "B-SERVICE",
+ "type": "roundrobin"
+ }
+}'
+```
+
+Suppose both A-SERVICE and B-SERVICE provide a `/test` API. The above configuration allows access to A-SERVICE's `/test` API through `/a/test` and B-SERVICE's `/test` API through `/b/test`.
+
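The prefix-stripping behaviour of the `proxy-rewrite` rules above can be sketched in Python. The `rewrite` helper is hypothetical and only models the `regex_uri` pattern/template pair with a single `${1}` capture reference:

```python
import re

def rewrite(uri, pattern, template):
    # Apply a proxy-rewrite style regex_uri: if the pattern matches,
    # substitute the first capture group for "${1}" in the template.
    m = re.match(pattern, uri)
    if not m:
        return uri  # no match: leave the URI unchanged
    return template.replace("${1}", m.group(1))
```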
+**Notice**: When `upstream.service_name` is configured, `upstream.nodes` no longer takes effect; it is replaced by the `nodes` obtained from the registry.
+
+
diff --git a/doc/getting-started.md b/doc/getting-started.md
index a576329f155a5..ae432d4643422 100644
--- a/doc/getting-started.md
+++ b/doc/getting-started.md
@@ -17,7 +17,7 @@
#
-->
-[Chinese](getting-started-cn.md)
+[Chinese](zh-cn/getting-started.md)
# Quick Start Guide
diff --git a/doc/grpc-proxy.md b/doc/grpc-proxy.md
index 22e3297340f00..50404aef49d88 100644
--- a/doc/grpc-proxy.md
+++ b/doc/grpc-proxy.md
@@ -17,7 +17,7 @@
#
-->
-[中文](grpc-proxy-cn.md)
+[中文](zh-cn/grpc-proxy.md)
# grpc-proxy
diff --git a/doc/health-check.md b/doc/health-check.md
index b0b062deb175a..1b78831e95700 100644
--- a/doc/health-check.md
+++ b/doc/health-check.md
@@ -16,7 +16,8 @@
# limitations under the License.
#
-->
-## Health Checks for Upstream
+
+# Health Checks for Upstream
Health Check of APISIX is based on [lua-resty-healthcheck](https://github.com/Kong/lua-resty-healthcheck),
you can use it for upstream.
@@ -77,26 +78,26 @@ contains: `active` or `passive`.
* `active`: To enable active health checks, you need to specify the configuration items under `checks.active` in the Upstream object configuration.
- * `active.http_path`: The HTTP GET request path used to detect if the upstream is healthy.
- * `active.host`: The HTTP request host used to detect if the upstream is healthy.
+ * `active.http_path`: The HTTP GET request path used to detect if the upstream is healthy.
+ * `active.host`: The HTTP request host used to detect if the upstream is healthy.
- The threshold fields of `healthy` are:
- * `active.healthy.interval`: Interval between health checks for healthy targets (in seconds), the minimum is 1.
- * `active.healthy.successes`: The number of success times to determine the target is healthy, the minimum is 1.
+ The threshold fields of `healthy` are:
+ * `active.healthy.interval`: Interval between health checks for healthy targets (in seconds), the minimum is 1.
+ * `active.healthy.successes`: The number of successes required to determine the target is healthy, the minimum is 1.
- The threshold fields of `unhealthy` are:
- * `active.unhealthy.interval`: Interval between health checks for unhealthy targets (in seconds), the minimum is 1.
- * `active.unhealthy.http_failures`: The number of http failures times to determine the target is unhealthy, the minimum is 1.
- * `active.req_headers`: Additional request headers. Array format, so you can fill in multiple headers.
+ The threshold fields of `unhealthy` are:
+ * `active.unhealthy.interval`: Interval between health checks for unhealthy targets (in seconds), the minimum is 1.
+ * `active.unhealthy.http_failures`: The number of HTTP failures required to determine the target is unhealthy, the minimum is 1.
+ * `active.req_headers`: Additional request headers. Array format, so you can fill in multiple headers.
* `passive`: To enable passive health checks, you need to specify the configuration items under `checks.passive` in the Upstream object configuration.
- The threshold fields of `healthy` are:
- * `passive.healthy.http_statuses`: If the current response code is equal to any of these, set the upstream node to the `healthy` state. Otherwise ignore this request.
- * `passive.healthy.successes`: Number of successes in proxied traffic (as defined by `passive.healthy.http_statuses`) to consider a target healthy, as observed by passive health checks.
+ The threshold fields of `healthy` are:
+ * `passive.healthy.http_statuses`: If the current response code is equal to any of these, set the upstream node to the `healthy` state. Otherwise ignore this request.
+ * `passive.healthy.successes`: Number of successes in proxied traffic (as defined by `passive.healthy.http_statuses`) to consider a target healthy, as observed by passive health checks.
- The threshold fields of `unhealthy` are:
- * `passive.unhealthy.http_statuses`: If the current response code is equal to any of these, set the upstream node to the `unhealthy` state. Otherwise ignore this request.
- * `passive.unhealthy.tcp_failures`: Number of TCP failures in proxied traffic to consider a target unhealthy, as observed by passive health checks.
- * `passive.unhealthy.timeouts`: Number of timeouts in proxied traffic to consider a target unhealthy, as observed by passive health checks.
- * `passive.unhealthy.http_failures`: Number of HTTP failures in proxied traffic (as defined by `passive.unhealthy.http_statuses`) to consider a target unhealthy, as observed by passive health checks.
+ The threshold fields of `unhealthy` are:
+ * `passive.unhealthy.http_statuses`: If the current response code is equal to any of these, set the upstream node to the `unhealthy` state. Otherwise ignore this request.
+ * `passive.unhealthy.tcp_failures`: Number of TCP failures in proxied traffic to consider a target unhealthy, as observed by passive health checks.
+ * `passive.unhealthy.timeouts`: Number of timeouts in proxied traffic to consider a target unhealthy, as observed by passive health checks.
+ * `passive.unhealthy.http_failures`: Number of HTTP failures in proxied traffic (as defined by `passive.unhealthy.http_statuses`) to consider a target unhealthy, as observed by passive health checks.
diff --git a/doc/how-to-build.md b/doc/how-to-build.md
index e1b8d8b672ad9..b1a276eba37f9 100644
--- a/doc/how-to-build.md
+++ b/doc/how-to-build.md
@@ -34,21 +34,21 @@ You can install Apache APISIX in a variety of ways, including source code packag
You need to download the Apache source release first:
```shell
-wget http://www.apache.org/dist/incubator/apisix/1.2/apache-apisix-1.2-incubating-src.tar.gz
-tar zxvf apache-apisix-1.2-incubating-src.tar.gz
+wget http://www.apache.org/dist/incubator/apisix/1.4/apache-apisix-1.4-incubating-src.tar.gz
+tar zxvf apache-apisix-1.4-incubating-src.tar.gz
```
Install the Lua libraries that the runtime depends on:
```shell
-cd apache-apisix-1.2-incubating
+cd apache-apisix-1.4-incubating
make deps
```
### Installation via RPM package (CentOS 7)
```shell
-sudo yum install -y https://github.com/apache/incubator-apisix/releases/download/1.2/apisix-1.2-0.el7.noarch.rpm
+sudo yum install -y https://github.com/apache/incubator-apisix/releases/download/1.4/apisix-1.4-0.el7.noarch.rpm
```
### Installation via Luarocks (macOS not supported)
@@ -64,11 +64,11 @@ sudo sh -c "$(curl -fsSL https://raw.githubusercontent.com/apache/incubator-apis
> Install the specified version via Luarocks:
```shell
-# Install version 1.2
-sudo luarocks install --lua-dir=/path/openresty/luajit apisix 1.2
+# Install version 1.4
+sudo luarocks install --lua-dir=/path/openresty/luajit apisix 1.4
# old luarocks not support the `lua-dir` parameter, you can remove this option
-sudo luarocks install apisix 1.2
+sudo luarocks install apisix 1.4
```
## 3. Manage (start/stop) APISIX Server
diff --git a/doc/https.md b/doc/https.md
index 5e7aa0edba627..c2091a72db475 100644
--- a/doc/https.md
+++ b/doc/https.md
@@ -17,7 +17,7 @@
#
-->
-[Chinese](https-cn.md)
+[Chinese](zh-cn/https.md)
### HTTPS
diff --git a/doc/images/apache.png b/doc/images/apache.png
new file mode 100644
index 0000000000000000000000000000000000000000..d0075db9e3691081e94babfcdc8a5f6bf8bacf8f
GIT binary patch
literal 8491
zXYBi*rc{Rxrd0LdsYsZ}-7N)_4QITSfp&b?e@6fI*YB1PEKEto14wQD84!ev;0^IA
zl@S>IKfk1>59~ghL&7cSI}E5*fLf=zzFvNPW9503+oear|9mxaTLm5r(Mitmyh21g(OBZ{S`
zfJ3syTyT84K3b+NX0J@_y%+J}elic@6FlMD(T$zyXE+UpBg43qDf+zaeEI9M{O8Ep
z@KC&OeM2;?-c9U(sp`BL_jNd0S5fsm8_#Z2ygRD&3
z?QO2B-5qM}1s4|Vi)2L?;@xkzm&iH*#w&Lx5#h-C^^Mn-@w8}pJaZ$?)3O(p0@7dD
zn3-_{#Krl~k{o7W#$~B`;sEboi_dBbM+TPp^n)F{IcG%mKD?0Uxrzy?ATg
z*%28AWC|TUQ(+z=^=EooT3S@fqV$@CtN^&+20Ac%x7vg`za(<4fxpc_y1`fUFZMq8HlnI8p2tR?tGh2yii!bkfpM|1#bdAj
z4Cn8EZBqglyL^6#zDH;)k&Kp(NcP95qj=>1{%C$h5aG+cd^mgL?A_t(k4mnaH6BGD
z(O%hJL^u+4mb@F#KSmMZE^FN2M5fxN|C+TF`aJF(_#KXNt95TV>EHDSzi!gcgQ5FQ
zp#a>zPM%}`da}X1FC(Y-0elto_af23R7t(I5N-|OrT>pJmah3W7>AHjt%T3K^-4J3
zO|!}E>M!&+L|^nT&LOmpJEY3f6^?0qK^7|X
z-*`4*@$ip{ZLRp7;I}S6=++o7#dWMFyBh;yM0Z;$Y&SEAd>=30-~L)(hv|gy?XRg%
z@&7|At)t-jFGa?_$IwOY4jTzM`CyB_t81|;)qfmzw7-0N$NFP?VjEcMw1R@2sTFtc
zVr2@l{)ZwccLH4_FdcRF2Ko?jvpqBSv2aOZQ-n?Z`^A-X7}I`I!(UPPYd3)B&K<2COI{~ByRNw}%&`5U#0qcIxkprPfR|Eu&
zyt*(h1KcbZE3WJ+_C|#5+4+qLr0@Q$BOMDe!@Qhal1|SQ{^H8{6JnOQECmDc)&bW9
zARtDV`3C@0@K!lZtzo=)elf{Q7HN{;u`#ZjjHBbEGvZAw>JP!Fjuc9M>zISo!tI^J
z20tpFooY-~<%Hslr>b+Ge2!|`L*gk2Nv=#4&&gpXu#p2$X?su7vSVz08!(Nmx`n4
zV4>m1hli>%5W2q5TKOtgrH+PM<_^rd@2;|V80nU|m)K2qssM7JO4)gUWM_8#wLA|h
zR*AHuhv&S`9G(dhA8SmI(PMrW_!ps?pr2`0l9zfy$?Hla6%wtPYZe6D=O0eA8LmtL
z=mm%r(+{(2G!F}ut~TvUYJ#e62s;{#H7DCF&*<4^;Arz&cB@f4kI+TC6`UTfkDdx$
zwf=+AameuU%3u-!myA{4pSCL9uq_K1Zli~>yt2q2K3&{wU(8X9NR4@~^F)llsOGcPu?Jn;7I(LdlrC^LwPQlHqf
zabVCE>|aY<9LbsosN;?Nt4&z6^0U@~WdDtv?i-VRz>1Z^4O
z6<(e<(haHUg8CsKUwMKg)p*E)j}YF`K-e=6KAnX^*C^p3XAoMhmep^GkI0yu`Sf;T
z&LlG|V1mgX+2hvHMqsUDOES>Y-zD2``xk9s{R#zP>$GR)N0Ms;p?`&XQZ(?eE3S2@
zg?#hz@!_7dcPhig#;>Gu?ABXcD28?@8g2>QU;A0Lx$3o}+*eI=45D26`f_xBcpae5
zA{r-kJG?UmAWhTu^o@evfIdN+EYw51Wdi?`-pUVqOcH-pZSN?AyRXWj6G(bk5R201
z&M%dp0q2v}#FMHt)m@Oj5qLbRX1pL|VH}uafE>4)hYCnRi)Y>wkRCepY*sl;%%c8>(Js$R7ea
zK)@EFHrWb~b_Vrp?AK3uLE8Tz$Gu?=*j%<&iEAtG?-3w58K|iG`dsDP>(>??Tpd|@
z-pN33_%uJWtI*#3Ii4AitRm%LWVkIE+CqNZoK}{dZ>mzpfp2%9CZxVo-aO8vAvRWq
zKygRGd!dlSRr#?4`lOL$V;cD5cWBmp-1*t|DdpvVq`lOwsH*sl%aSFxJmHD4&fL^K
zfi37%V6(ZtVoJ$UXZYFO@d`>o7z-yN%Vt-GqctB0?@#85Q}Ux08JK+A
z$5+8y6PUR#83^1phgDOr)pc&u3vMgMN_=oZv(E>KqAQ&8tC*XZ@uZ}rCwPf*_ULq6
zS`rOR1W0z9Lq91;Mp`=HIg@-fB+~{!H=kY8h(4{#5q3uJd`eg-u*9IG0Xa4RDf~ko
z6d}CkK{_%%I1)W>OFdPECBsoJY5S5PB~k+lH9_JLX;tTKBmWe|1nF%39f{nB`3~ZP
ztnIP$573wFDcQ_XTKmyPhKjm-7=Q7u?>;T>F0kDneg)cCZ0kQDJ3i1XntlmH@lYM@
z0?^CLfNm%bH@6PNkuq&ySf~!+j82nzTA
zd(N{blES
zq8xCcJBCd7IjbrA
zU!#%KH}fZY(7=BY>1aI=(exw}+eO^5*G%XLk1Iwy2DwHgN&FfMI
zoMHkli)l_k`K<#=*f`vN-6=)b62wUL6g5(+_-rqoJR^^LuOQ#eWfM2
z+tkzB=TyHugswcen$fieN)2zII;h6H6pokqlsL?tG5<0g_hA&_1;U|4*$6<@w8?qP
z2q8x~9HJAUZ2OV{A(i+7k!?T%r*i<1^5)H?yoMZO`SBTq7!Pzx>n3teO}*9y7fjG$
zg89RJ!kni`-N%F+(z*eTbS)Q0x5uW2y;bx#|2QC^#DJq>j_Ja0a-3g_dN;@uM-mhh
z!wcK$4TA$6yN37gsVry23{S4o=Cf$;@11vcA4^V(cU_s)SpeddY{zMxj=Y;MB7~h9
z1v5Irpl^%O;$?ukb;`*=SD$$2g)t+cbPvElnBR#ZI>CQE-0Vm)pdsolhU;Fa+Lggu
z#n!BjD!NeeEOZ!0O99R!0dSH5O!Cwn1-40SYFXuMqr;_996d{;NzsLlPP^8hB6Krk
zRiP$A-+-um+<`BVnt*_KZc%99Fa7x)(T+|0%Z~7r#g-0cGkszKKbie}+)8aN>Uiq4
zbW%4gMM3*91I(Z|={T3301ty;I&fmQI%ph+3hKtV%fuAW@xoLP$Yz}8>dsyCAZm-H
zh_Sok*kd08Z6v-lBOK}TKM}t8jleAg2)pcOWn_y+vnlT#E*``t{SGRA
zTV%anx>vxy5ajQF(8wJCszRf#ElvHo1xqn%&FAUUU>N8_jz{Ofjtvub%Ff@P2|r;3
zH}`DYS6?~UF1Z;;D=z^aD9xzCk0u1Lx9V$!S96SH5wM@|>PVbon%ZlF3Sr0jcA@Fw
zQ%JQ9k45{}Ig+lO0iaSq7fA;_@ZR|V5qT00%u;u~E1t^v-EKpB6@B~V>!7nosjmQ%
zh@OR|YS@Wll+*As*aJ&l9@TlJi$EP5GPF|I9mP?y1Yn*T=oW)iyW5O>rE6GiZaR
z26r)YDtXsmX@qP&tL#b2-NOe)g5t+ru0}dbe6t=7JN1`kjl|-49)x2s>b>3V09k7+
z8NdwOVUQ9-e~Eq*4Bc&-@OgH_8T2pEtW3D}7#JChhzxKZRvwHtmR2+#c9pYQ(j
z%Tb5{uPpgf?0$lbn*__%(ElWg6*tT7FK#N@ZBRrATI<6(&l+^y;VFV#d*hLaOZV|h
z|8cPqB>k#=sfYJ$u1efxe8gI=?ABr2ofh1|)W@`M(5h5Z4C$`M{f&*!$5HjaS6sncR=m5OsFC
zod28S34KgEllATp><#{>^Lw64bQGRT4qYu~)X(mAMEm=LAx8hW4SDUIn*q3m6rQRB
zVH$cy#_xNb3bbFr?}?CH^1E|>FVPnWHG?crV{g?-a$$XEbe~|anx|z#RtV2NV~^zI
zZCjeqW3TH;5C*!rw|9u1sD4-^Y|D!>=u?WThoLJoF=|&uOaA^`Wb(gWe)k^4r}S>U
z2%|@T?|8(T(>Q-(ucM6aBb0~1Lj~qa&jO-t=d8jafyh!B!(t^Zxq(UHuDGIo|Ld2u
zFW~I0HnQf}V^}X9W>$=U2tA-G%&j3rH}?;RA=;xNkWFGcMScq1@AP6j{Lt4o5W)Hu
zod*`=cxdKRHP(*4kM>UdJ`7_#2_f@*UqFka4YN|v3u&ncpm9SmmJ~Cz1RgzkQdj8H
zz>Pjhdp7{Cfe*}N$NSoga3il~Yy`#RG~aMp$?D@(xHqJ?>cV-StLGSD7pN9qC||*Z
zdV?cw3TRLLGmXEei3fHC{*$#zHn8WuVcxoR%Vw;Zs_fEmf
zU$5TT-i=UUq+D3F-JGLR!wa%Sl0UzH%2_-)4<$Ex>`2V6q5KN|f4ms-p5JKUF8tkw
zLqM4Ll=Sw-LR$iyB~K~2yy><+C9RJq}r=Er?|4A1|3Guk*1HbdpTP@KQrc+bR_FTy1qn@7%~_I&)G
z7gfH3d8P>ZA^g9;5MifZ58*M~-`pUe%@u{(kaK|V9rwvn%0~TQRJK5{F=tt#s8ZRWq13Wb;
z7akB*o$GVekH+94{o6?Xh?8Up?6=YHv3qhGcMHpVOI%74^P$J)|B&3EK#rgvqGzSi
zpHa!^1u+;G?NR9eqGPZ87bT7UPfu>puQs3EZUmyghQBO?B@fROD#D=;xvO`nQU8_R
z$tpv=*?3TiyDiIi7_6a#FH-yhjtL5JaKY7i2JW&Xg!bye
z^v}_bm?S{2FyVjO|nGcf|LJ@#!c=d=6QMFbAlGO@Q>pRrier)t|Y|is6w?JgTd@wkTq=gv0q$!DTScVg#RTik
zT2iFm-NAnTWE|iD2x+E-d$f)=aoa
zu4zFi`u-iZ<~Z;&O7CB!0<&A4c5q8<&ALO8VX3HCE02GQLj&&`7geKezS5T7GGe*Cnmu3PRK%bw2l?h{j@wNN^XzU`qr
z@6WEFZNh|E5As<5K{Mp1zA5MeC9Mh2ZgF-Zr(SwxcCaM8E>cK!s%*<$DYFc>1(iA<
ze3{zmBfbrVi1VBIiH@eR_c2`~41x0aR>M#_VGRr?L%DDp`AWT*?yaNS%P-ju_8wVB(dLq=#*Eh%v8`(26;
zD|5pM9$TBEV>gCs!)6<0IIAEDu^t7L&-L3mDQKt0D=x)y9A`YrAfARWz!|vK2cbpT
z*>_i@q@-*rs2m?^n5;nI!vJ&-k`pk9UZYe?+%RdVJ@bH=hR^FpatXDP!(rbg|3#n;
zv+96^LG@HqciZ(#gl6&JdbAQiz`W6u#Enyt1lJye@D(+;kuTCz4PTIWL4aEfnOCr6
zp3;a}QjR=$2;m?--d18Q`^nChb+*XP2Uclh?SG{C*I%>`5L1@!HU|>9H3;$KI4}=&
zuT@ZXG-h8F8X9U4y84ps-*cF*uK
zS5T3_-Y@g4$ixtbbVU->+|mm%L4)x`F+#q^fY4B8D6vhz)6j%0b0C{Km-<>nHxg`E
zVOYD1M~1{-hnlLX1ntgb%2TyL$2}@*&t}*SN7LQ
zP0a@t4`ZEIN0luSQSwx}rrUdD`rDK`X{$?AA7*AYg#XVLl!n06A-l4)9J@{N!+|dm
z!5bOZf99}cAAklWDw~jm39YC**T;s?21C5#z9>HTUkr+0EzFAyw?nnKTB!X|8gESK
zD3`x;-0XaF){9#q9
z6Nawb@Mz`;*O_n;8E=b5O`fs_~P9ns4rtq_LhD~fZ&^+&aX
zzpCPaQb?zvda-$D10w6spkr*Pp2?*wh)z{2$%`2J@_NeGfPBzGc0hWLH-$pis6OiE
zmj;jSu=upc%6!?YBJsBPJS0~`E+HFm5qD&bot3qE)uY)>Wn=5^9TYSE$dY@wETU%Q
zjtJ8XYQ*L{-Fw{Fk$b{Qv(4t+AOU$mtg8tAr;vTNh&zROgx~(>oe3d(jUxrwgmS(4
z4Z;-+I-A$Oq^ai^KoDohn`6~J0dmk7OEMSss0p|@TLW@6X-5)0;0xLV*hpg|MuMk}
zo^f;+?$Z-^K5{U`FzL>JZIXDB@I=FeV2D;UZVzNd)Q@oF)YKT-ts2$@2SJVXG=)gd
zcrjEKhh34Ph)x-R${uPQt)fnNcahR0J^^Z=
zksjR7-Kv!HHz5__hnLs5kCQO4ssAiH^q1#pYqE6>y592~wc4PYXzt$&VUb#e`P{U3
z5|jffpjlU7j;<`G0dh><_WX}b!pgF9wnZ4`&?
zn0h7cVDdYMSGXn$%}}@goc)8VES&lI<1TlJc=Z$VPp{GcMv%Tmz3g_1SEv$J>hw9%1ig<9I
zs(%%ZSJV+ahyg0e6fTL6K~WAzRBm3Km`AY-?K*bl)K4u`7(`LvuoyXlt;q*dxI^lH{*v}R0%w4
z9a3kuVY>d|o;X+RQ2-Xti{5DA0e?KNd#&80n_*yHE
zPiLG7!@#8Y3~4N5>}HeTD997w*o5EgGP#y?I=#5i@
zv(7*jaHZk>7RNN*7T*bR+7nnjC{{x{Z)NF2i;=7`eWx}jtvqso+`Q<{gFYMVXI9S3
zheqw|c(E=EEPUp&L}cm#wqQZ9f%cZ))^mBQaawHKL9Vz4;@meT4e!~;qdt82F^lYs
zM6CJ!jw&h2XUnWwrjwLhme)GY=GqGT6?lY%C1cof?Ruu3h`FdkziraJq>HIZEAZh+
z4-s;-T(V554^KCNGn5N*8gs7z(vdPb>8>!GUW0Y%lFnwpIUqO)gUY&|2kc-lA6?>9
zYRoB)l1qC!-c&R-3o`5|_6t&aY%<
z%vdH@d#&`QMQ81Dh61{B=ed3fnO7~YM1lL_)Nr+H_JmidVs7Oi@dR|7=Cp}47bw;B
z*x=>Xwne%h`~Sd18mrBo9eF^dudko8E~vF@l;d88SmgVk8QjJYdvQW~w|9Vs;;q*y
zaaQm0vKUDc;%#~=no&5Gf|7e}{{raGLm3RZy_*?2AVxCSn>(1};#88VEz`{dGo-C=
z$0?Jdck9{_KY$%^fzRR6tvGSl~0?l
z=2}%MI{<`b_%8(whqha{`(MCREpX4%8eKi}g)oTq#i0{|c9Md?z<*i0W*RivK8z%M
z<%FtOn$kKhM4ffi!Xp+foD5Rw9|nd$wgKNS=M~+
zX@1ioB<2mBhOl@BfLxNHCVXH#zD>tP7RlCZ(!P?(B|O3ZRpZ)c)wKu++*?LMx8ufx
zbR`{e^YNP-YcB>@@+>-Ttg3;*0qv|mV;#%A*>w%_mmHKli8AdMlTIl~WKBP!i1726
z&rJ<0MPLl$uf9~}jBvStOT?=}xt?HJ>2s8Gs^g$Qic1&VqvI7in<
zi%nY{s~Q7*4Fg!rvezmgJz)?Du~x%Kc9CIUPYzWVNI-@AJ`=>;@LuqBnF@A28xn%5
z4!e3wW!qL$IKn)&>h`hN#YM_5v+P->M<1<_<5VDrb|FF+!@$VM7?|{>8zl2CHe~FB
z&G_bOkYqbb{v6E
zL5tUfRtX-33Fvc6PDs)i+a=;WPokYsXZa~oMA}=A#|QGPI3JNJ)TzR84kW)|zY3@l
zi5&I=3T&)%^>O?l=zD&_(y
zVW-X5nn1r)BMHWFJP39}p!|3b?fn2tF9g{2T}Sjs)DL{H_yKNGAb=La)yFx{+6jG~5b3y>
z@Ph(O?}7*%{(_Jondt?j)h~dko&$}QxSgqh?r52P*gK?w0Z)!1m=Ut~)8}=VPEFdI
z-UIxCiwNVDP+B+wwX^$Qfq|>Svs~4dcNz>*tqmm@s%B`KnxYGD`JE|&6#am`p-=dP2II!#b?n-~
z`3vH!b2NwsvyMeI#9pkMZpgkd5gz&Y&-vwV57B{YK}}~Q7iV)N2CT2v+@*;Q{hLCj
z7jiZbh!h5)03RP;j6OBfMLZ*pK?8KtW
z*AqG0VlNUjJixiT0pNgagDO%o{#-Dgz@>THQpQn{BttdA)Nfg`a4ogh_EhTBqhVe}
zOLI?Uuzs)PWd>I8>~F8(EVD@A%^LF)+a%dw2nvQ=Z6#QVwRuLL$EH>dU7e*JN**92
zJIS!I!!OhfuIlIElz`TuUE2=grYR{N!4JspaLR+
zUq~r0JN!USWi>|oX-4+TzJbf}5>8VE!vhYgMl2iZ50FXXmqqOlCw&;2|2AQ
zx4JYr093v(B(y6KXAdX87}ANvCF*Nc-W4*jYvu*Qj$A%54BH%`b)jIr3p{fUVzz>$xSCChEt=V
znmG{o4v2+kJn^!sMhh&u^BdS6CH<&0S$qY6i7>XS;VffdLMO{fi3KfIF-3=_U7fl=
z{3Y@^)a>Uw<_xIozM#C_KmJlZ06D(IHkacLvnp3=>F!^j_bV!WTxPf^+?C|$BvL{&
zO{d%0Ob4YP$=H(jLa=h|vpsQ0h{5~3LRU@Bw&C1$JW`nI^Y#fxf{D3Ryy?tBVk=`E!&I>}6j%$Ej`rx@I1HVhr@K
z*Q~q1^P^Cp3I8N1mN3KtYRcAXsNELAh99}JOd-i3S`ph(cJdUbhY&YxxrT$v;XMf7
zUSf%RxO~V>Nvm%nfjVb9PmwrdLIj4at~4Zjgw*9QR>a9OHUpA5@mWuoyjI(!(@7a3
zx$Hh~XT(TiF%t+5`zLA3D47aCYo3Etz;SO}{RV`y8QU(b6Drum2OVe9+
z*??U8wQg}&Gk$=!<`nlre5bA;oV`}}$gxygvE#)L*2@*1ph9s?W()yFS7MYZ9*Vqo
zzfR$~&kN4(0utW6kv5#iw#$dhbus06%~>yQX;C!2S9%2MNz=fcm3K&6D
zfemBR<=r&<4oW|;`=fySmEZvXjV1VU^!CK-ig@K@MWd6H{EeBqVgRGDNFi$)9C{PE
z7VjSs}!@2s-Am#X+P*q{0a4v^qDcPKCM@0*4-n!P(veY6^Dt+C>18BJ
zxg3viK3t_r7vUWNsN3US5B9=hU0^_2=dEvD$WZ+kVBz
z=$jdUemcHXiWNWLiv28rdpFoK^PnAM(OX4`20IY_7_O7luF=Z(8+6ky9%GwcR?6q(
zUYGr${n&m`QzWlYKR!Qk^X#CIVIm
zNANMuWz!WvU=9LGiwomSc%i!YEjyNMUh~$<(6jn!2QPIlCWPn&Knidqu^yp|LeST4
z)H3SoFVAB;ft{mtEvYa%YBM(u0J4vb8DA<3E%B>C_S0!R{q4ZC$5zqhn~RnJYSKQ!
zrgohpg`${C6M}dqTvDUicA)*pw2d;r^I#;AZt@tPy>r=*Iw;zGRjPD-Rt5oX+e>Mi
zoq;k~DzzXFOM^C%Te@xBwn+BkVo2_6n(f=u8gE{8yUx|Y3&Re}hXY
zdsZa=2edU%D9nE220A-z^OL5LF#l06YqA+~6oMPG*1uj#L~iY2_432C^0-Yf(F3`A
zdfS=!yD{Z`DJsu}uQu4w(ev-YF;9uVD&l^GdTG}IWLCJU!jnEgm5Uzt-c`eICix7-K@
zlAL#orWl&g^}N6kOG@&XoF7ZgB+8!-B;;i_U0?nQ9V*`kKFBuofM&&vS(hgP-xN7&
z4qBtt;A&$zv*Qp0;hm<-W`hVH9fV9ft)5R5GYR5uFeoW22RikPnLC2sV5|wt#2B-A
zL`gfS&INvKQFNLs@hGqR`p?Tt2?+lEIR0S{t#)%|;FNf#pr_=OAQKY)Oxxum1&eOw
zTCPQt9yi;zyKv@E<#aWst9}bBmwwF=1jgS-{P8Z{fhLGo9v}dxg@Vw((RU=-4b}3F
z?cjE!GeqdJ62K+tn9V!Gj0oUu%tMC`86}Gla+o&1de%3=-I)PLwb*`qe;eyS)jVIc}88Y2)?$ql(=NaEH)RrEX(Ee@-4F8E?9JgM{3}DBNl(g0=
zgcqa&hRnT4ITbck6)=+z0BHWF*|6!T{h25D^|m5!oA~Pl)81WOm0@(G;4*mV(llNG
zKGyew!`C2woSuMFLYb6vGIW}*#O@N?yzmm&okCgheO?m?Jj5lUKzarn;AXT_*GmP(
zS9?z`$HiaXo)YV$)*er!0a*;GtOr5|<^TZk0N?4L#8soZ~oDG!WatG9Fd70{ayh+$_Hur+0+dJrdNemiJDaFCA~ZgHQnhh(7F!
ztazCTuH!KXSb>y0+@$%MZ774#@LgE8Jj;v4#gyf`VnZQdV~{O(^3*BA_AI^F`omZg
ztcZzU@a_Kb+La88C^Hd_1cdUl$o_d+o#2`(`Lmm(!2t0O%FcxZfLETd6LuMZE1
zHGgnuX2tA6CLKs;)+y}=Dh~+}K6#DA7e`6I9vm4Z^k==67P|7)tdK)O|IcWy;LW@q
zzVh?dpNJ7i@7)xn)+sX8Wi(qM^4y$FY58bVQo2+ii(2_E5M2oGHjbo|`($?0F$$#W
zmE<7`pdSg45cT58?F7*5-P(`AIJ+Q82wOwaU*wC{o_hNejQyM_qyY%g3%n16=e!`t
zpsvIb###24w^^J}-+h}SCY^6g3%^KNt=ZYj{dp6NXgJhX4E_LEk@-e=K<#L!&x^co~4jrbxzR2)!l~LU1FJdj1I6?|C
z0n7j|2U(#%Q}$0qvR+@wW`Zt0-k1EfsJ?b@@o1$l;`1edBZ6TszWB;B)Z`a^B+N6G
zHU!gSo;T9L{2^NNSB~T{MC@MLFohxAjdwq#_0Prh!V(2Bl>w#hpS%C1Ub@8We3fy?O$%AqG-ycCWi0He0U@)7*s{#I^<
zx=rx$<3~v_36(D5($^@bP+9*$o6vqhtc=j7WPeUzkdJ0;06L91hOX=?gYB1mM<&U>
zF9oVEAXRz1*r(wSzY?kta%0j*k3QLRn+U|c1k`3AJp;qNzZcTrqGCYu6;qU1qS;>n
zVd1l9>*`;ah=O(|6vc--6RE~WYm2x0`mpZSiC}doLP5RFP%VYA+{3Hruryd6gJcxo
zZ(SFsGkcBo(TaNRaDWf`RciFlyTjHye-T&^0>4ZGHtQwV#i73-4M`+0>JF1btxQ@)
zGnWj5dF3BaEh-pr;6a9O?DkMv$Q?y0D8OJ$l7I5^ujBIC*FUi1vyZ%KUteFH%>Lox=AqkwZ(I<^adLf7mP;5o6K==`<0`w?ETkiGlV9}$7%
ze(Sx0gesSzRn5?9%+kA{1hE+%;XgP90pKlBewt;AfX05QS!6jMgBDAM$~qy#W#7MW
zGSMEu0v9OuqK~wNg$0pTq|B}rgBySNp`7}=J&YE#p5R!3=sgFka9!nFC#zc4^=vno
zW!ij9yho307&+wRDzpQAe&W5GjOay_JjtO-a&iW2+u?va1vWCu^wHjg&$_C4NF-JI
zp*VjPN^%FJ=w839q4dt)
zMf3!Tuou`r)ey+D+9Y{dGE@^H4@!}&nBXfJrXGj&Sy7$f$(~aLS|0owC0m{rf)2_K
zQT%jxQc`*xALc=jyvniW<@3o2js~}XXNY$XI~{L7$l4ft<6_mcM(-o=D!!2OJI%(k
z{YE=LX~1*vlB*y97a>G<>8a-hqy{q{M2vAxHSW@r+WM_Rts8bc9*Xvr>;|E~M!HPi
zpIV?xNra}uK%3nNmbg8d5MlscY`{phnJt3b053QO%#17eZ0)bLH{Uv>yXJgz6tHFC
zcwTZ#5P92DOiH=J8gj{(eg$1`tNIya=;f8)*gZ^VS<S^U2V4%
zW;nSSQtk{%jW7|HypKhgRp9UYmtd}nc0=bWC*ZvQ1UZ|Z-y0#b567CHN!H<=GbsY8
z*W;c(9=D5#N7mBf@K>HS#47oG+J*(|yz2F;sU9xtUSajIYRaJOk)&LtT#yA&DS=>;
z3PWK-U^d0t_8f3{K}BRW1i{lQpr;l6mt5iGU;@0Kvn}@thi=&?H*)1`YeLhK`vRW<
zQhi;_i(Yl87J|LiG2-3^z`v%mzd5ENJ0n`E+m8#IK(cq_Sf&H)dDLg@&3tQ}#iHOW
zJ29f7(l3QMolviG1?qo|sNeQFrxruiCBgvitLw}CTWiu@Mi2&v95}Wamj9e8o~YY+
z7Q)UFQITT0iTS|~ezJkx8w_bsqz)_|Y|;Qo71e*R*w#2$%rRj`b^;uA4ln(BGCY;p
z$BAeHUI0{tFC$@zM