address hopping

cnbatch committed Dec 1, 2024
1 parent 54422b6 commit c1d0685
Showing 21 changed files with 1,277 additions and 569 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -103,9 +103,10 @@ encryption_algorithm=AES-GCM
| Name | Possible Values | Required | Remarks |
| ---- | ---- | ---- | ---- |
| mode | client<br>server | Yes | Client mode<br>Server mode |
| listen_on | domain name or IP address | No | Domain name or IP address only. Separate multiple addresses with commas |
| listen_port | 1 - 65535 | Yes | A port range can be specified when running as a server |
| destination_port | 1 - 65535 | Yes | A port range can be specified when running as a client |
| destination_address | IP address, domain name | Yes | No square brackets needed when entering an IPv6 address |
| destination_address | IP address, domain name | Yes | No square brackets needed when entering an IPv6 address. Separate multiple addresses with commas |
| dport_refresh | 20 - 65535 | No | Unit: seconds. Default is 60 seconds; values below 20 are treated as 20, values above 65535 are treated as 65536 |
| encryption_algorithm | AES-GCM<br>AES-OCB<br>chacha20<br>xchacha20 | No | AES-256-GCM-AEAD<br>AES-256-OCB-AEAD<br>ChaCha20-Poly1305<br>XChaCha20-Poly1305 |
| encryption_password | Any characters | Depends on situation | Required when encryption_algorithm is set |
3 changes: 2 additions & 1 deletion README_EN.md
@@ -101,9 +101,10 @@ encryption_algorithm=AES-GCM
| Name | Possible Values | Required | Remarks |
| ---- | ---- | ---- | ---- |
| mode | client<br>server | Yes | Choose between client and server mode |
| listen_on | domain name or IP address |No|domain name / IP address only. Multiple addresses should be comma-separated.|
| listen_port | 1 - 65535 | Yes | Specify the port range when running as a server |
| destination_port | 1 - 65535 | Yes | Specify the port range when running as a client |
| destination_address | IP address, domain name | Yes | When inputting an IPv6 address, no need for square brackets |
| destination_address | IP address, domain name | Yes | When inputting an IPv6 address, no need for square brackets. Multiple addresses should be comma-separated.|
| dport_refresh | 20 - 65535 | No | Unit: seconds. Default is 60 seconds; values below 20 are treated as 20, values above 65535 are treated as 65536 |
| encryption_algorithm | AES-GCM<br>AES-OCB<br>chacha20<br>xchacha20 | No | Select from AES-256-GCM-AEAD, AES-256-OCB-AEAD, ChaCha20-Poly1305, XChaCha20-Poly1305 |
| encryption_password | Any characters | Depends on situation | Required when encryption_algorithm is set |
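
Taken together, a minimal client sketch using the new comma-separated multi-address syntax might look like this (all values below are illustrative, not defaults):

```
mode=client
listen_port=59000
destination_port=3000-4000
destination_address=203.0.113.1,2001:db8::1
dport_refresh=600
encryption_algorithm=AES-GCM
encryption_password=qwerty1234
```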
40 changes: 39 additions & 1 deletion docs/client_server_en.md
@@ -90,7 +90,7 @@ stun_server=stun.qq.com
log_path=./
```

When using STUN for NAT Hole punching, the server cannot listen on multiple ports and can only use single-port mode. This is because the port number obtained after NAT Hole punching using STUN is not fixed. Even if the server's own port range is continuous, it cannot be guaranteed that the port number range obtained during NAT Hole punching is also continuous. Therefore, in this mode, UDPHop is limited to using only single-port mode.
When using STUN for NAT hole punching, the server cannot listen on multiple ports and can only use single-port mode; listening on multiple addresses is not supported either. This is because the port number obtained through STUN hole punching is not fixed: even if the server's own port range is continuous, there is no guarantee that the ports obtained while hole punching are also continuous. Therefore, in this mode, UDPHop is limited to single-port mode.
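
For reference, a single-port server sketch for this mode might look like the following (ports and addresses are illustrative; `stun_server` and `log_path` are the settings shown in the config above):

```
mode=server
listen_port=13000
destination_port=3000
destination_address=::1
stun_server=stun.qq.com
log_path=./
```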

## Specify the listening NIC

@@ -100,6 +100,44 @@
Both the client and the server can specify the NIC to listen on, and only need to fill in the IP address of that NIC:

```
listen_on=192.168.1.1
```

or multiple addresses

```
listen_on=192.168.1.1,172.16.20.1
```

## Multiple Destination Addresses

Both client and relay modes can specify multiple destination addresses, which must point to the same server.

```
destination_address=127.0.0.1,::1,10.200.30.1
```

**Note**: When using multiple addresses, it is recommended that the client's `destination_address` matches the server's `listen_on`.

If the server's `listen_on` is not specified, ensure that each address in the client's `destination_address` is in a different network segment.

For example, if the client specifies `destination_address=192.168.0.1,FDCA:1234::1`, the server's `listen_on` can be left blank, since `192.168.0.1` and `FDCA:1234::1` are guaranteed to be in different network segments.

However, if the client specifies `destination_address=192.168.0.1,192.168.0.2,FDCA:1234::1,FDCA:1234::2`, it is better to explicitly specify these addresses in the server's `listen_on` to avoid data packets being sent from unintended addresses.
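
For example, a matched pair of settings under this recommendation might be (reusing the illustrative addresses above):

Server:

```
listen_on=192.168.0.1,192.168.0.2,FDCA:1234::1,FDCA:1234::2
```

Client:

```
destination_address=192.168.0.1,192.168.0.2,FDCA:1234::1,FDCA:1234::2
```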

## Non-continuous Port Range

To use a non-continuous port range, you can separate the ranges with commas.

### Server

```
listen_port=13000-13050,14000-14050,15000
```

### Client

```
destination_port=13000-13050,14000-14050,15000
```

## Multiple Configuration Files

If you want to listen on multiple ports and multiple NICs, you can pass multiple configuration files to udphop and use them at the same time, for example:
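
Assuming the executable is named `udphop` and the file names are placeholders:

```
udphop config1.conf config2.conf
```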
39 changes: 38 additions & 1 deletion docs/client_server_zh-hans.md
@@ -90,7 +90,7 @@ stun_server=stun.qq.com
log_path=./
```

Note: When using STUN for hole punching, the server cannot listen on multiple ports and can only use single-port mode. This is because the port number obtained after STUN hole punching is not fixed; even if the server's own port range is continuous, there is no guarantee that the ports obtained while hole punching are also continuous. Therefore, in this mode, UDPHop is limited to single-port mode.
Note: When using STUN for hole punching, the server cannot listen on multiple ports and can only use single-port mode; specifying a custom listening address is not supported either. This is because the port number obtained after STUN hole punching is not fixed; even if the server's own port range is continuous, there is no guarantee that the ports obtained while hole punching are also continuous. Therefore, in this mode, UDPHop is limited to single-port mode.

## Specify the listening NIC

@@ -100,6 +100,43 @@ log_path=./
Both the client and the server can specify the NIC to listen on, and only need to fill in the IP address of that NIC:

```
listen_on=192.168.1.1
```

or multiple addresses

```
listen_on=192.168.1.1,172.16.20.1
```

## Multiple Destination Addresses

Both the client and relay modes can specify multiple destination addresses; these addresses must point to the same server.

```
destination_address=127.0.0.1,::1,10.200.30.1
```

**Note**: When using multiple addresses, it is recommended to keep the client's `destination_address` consistent with the server's `listen_on`.

If the server's `listen_on` is left unset, make sure every address in the client's `destination_address` is in a different network segment.

For example, if the client fills in `destination_address=192.168.0.1,FDCA:1234::1`, the server's `listen_on` can be left blank, because `192.168.0.1` and `FDCA:1234::1` are necessarily not in the same network segment.

If the client fills in `destination_address=192.168.0.1,192.168.0.2,FDCA:1234::1,FDCA:1234::2`, it is better to specify these addresses in the server's `listen_on`, so that packets do not go out from unexpected addresses.

## Non-continuous Port Range

To use a non-continuous port range, separate the ranges with commas.

### Server

```
listen_port=13000-13050,14000-14050,15000
```

### Client
```
destination_port=13000-13050,14000-14050,15000
```

## Multiple Configuration Files

If you want to listen on multiple ports and multiple NICs, split the settings into multiple configuration files and use them at the same time.
112 changes: 69 additions & 43 deletions src/3rd_party/thread_pool.hpp
@@ -44,13 +44,6 @@ namespace ttp
return thread_number;
}

[[nodiscard]]
static size_t assign_thread_odd(size_t input_value, concurrency_t thread_count) noexcept
{
static calculate_func calc[2] = { calculate_odd, always_zero };
return (calc[thread_count == 1])(input_value, thread_count);
}

[[nodiscard]]
static size_t calculate_even(size_t input_value, concurrency_t thread_count) noexcept
{
@@ -59,13 +52,6 @@
return thread_number;
}

[[nodiscard]]
static size_t assign_thread_even(size_t input_value, concurrency_t thread_count) noexcept
{
static calculate_func calc[2] = { calculate_even, always_zero };
return (calc[thread_count == 1])(input_value, thread_count);
}

[[nodiscard]]
static size_t calculate_assign(size_t input_value, concurrency_t thread_count) noexcept
{
@@ -74,13 +60,6 @@
return thread_number;
}

[[nodiscard]]
static size_t assign_thread(size_t input_value, concurrency_t thread_count) noexcept
{
static calculate_func calc[2] = { calculate_assign, always_zero };
return (calc[thread_count == 1])(input_value, thread_count);
}

/**
* @brief A fast, lightweight, and easy-to-use C++17 thread pool class. This is a lighter version of the main thread pool class.
*/
@@ -401,13 +380,25 @@
task_group_pool(const concurrency_t thread_count_ = 0) :
thread_count(determine_thread_count(thread_count_)),
threads(std::make_unique<std::thread[]>(thread_count)),
local_network_tasks_total_of_threads(std::make_unique<std::atomic<size_t>[]>(thread_count)),
peer_network_tasks_total_of_threads(std::make_unique<std::atomic<size_t>[]>(thread_count))
listener_network_tasks_total_of_threads(std::make_unique<std::atomic<size_t>[]>(thread_count)),
forwarder_network_tasks_total_of_threads(std::make_unique<std::atomic<size_t>[]>(thread_count))
{
task_queue_of_threads = std::make_unique<task_queue[]>(thread_count);
tasks_total_of_threads = std::make_unique<std::atomic<size_t>[]>(thread_count);
tasks_mutex_of_threads = std::make_unique<std::mutex[]>(thread_count);
task_available_cv = std::make_unique<std::condition_variable[]>(thread_count);
if (thread_count == 1)
{
assign_thread_odd = always_zero;
assign_thread_even = always_zero;
assign_thread = always_zero;
}
else
{
assign_thread_odd = calculate_odd;
assign_thread_even = calculate_even;
assign_thread = calculate_assign;
}

create_threads();
}
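
This constructor change replaces the removed `assign_thread_*` wrappers, which re-evaluated a two-entry static function table on every call, with function pointers selected once at construction. A minimal standalone C++ sketch of that pattern (simplified placeholder policies, not the pool's real arithmetic):

```cpp
#include <cstddef>
#include <cstdio>

// Strategy signature, mirroring ttp::calculate_func.
using calculate_func = std::size_t (*)(std::size_t, unsigned int);

// Single-thread pools always use queue 0.
static std::size_t always_zero(std::size_t, unsigned int) noexcept { return 0; }

// Placeholder multi-thread policy (illustrative only).
static std::size_t pick_by_modulo(std::size_t value, unsigned int threads) noexcept
{
	return value % threads;
}

struct pool_sketch
{
	calculate_func assign_thread;

	// Choose the dispatch strategy once, instead of branching per call.
	explicit pool_sketch(unsigned int thread_count)
		: assign_thread(thread_count == 1 ? always_zero : pick_by_modulo) {}
};

int main()
{
	pool_sketch single(1), multi(4);
	std::printf("%zu %zu\n", single.assign_thread(7, 1), multi.assign_thread(7, 4));
	return 0;
}
```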
@@ -452,35 +443,35 @@
}

[[nodiscard]]
size_t get_local_network_task_count_all() const
size_t get_listener_network_task_count_all() const
{
size_t total = 0;
for (size_t i = 0; i < thread_count; ++i)
total += local_network_tasks_total_of_threads[i].load();
total += listener_network_tasks_total_of_threads[i].load();
return total;
}

[[nodiscard]]
size_t get_peer_network_task_count_all() const
size_t get_forwarder_network_task_count_all() const
{
size_t total = 0;
for (size_t i = 0; i < thread_count; ++i)
total += peer_network_tasks_total_of_threads[i].load();
total += forwarder_network_tasks_total_of_threads[i].load();
return total;
}

[[nodiscard]]
size_t get_local_network_task_count(size_t number) const
size_t get_listener_network_task_count(size_t number) const
{
size_t thread_number = assign_thread_odd(number, thread_count);
return local_network_tasks_total_of_threads[thread_number].load();
return listener_network_tasks_total_of_threads[thread_number].load();
}

[[nodiscard]]
size_t get_peer_network_task_count(size_t number) const
size_t get_forwarder_network_task_count(size_t number) const
{
size_t thread_number = assign_thread_even(number, thread_count);
return peer_network_tasks_total_of_threads[thread_number].load();
return forwarder_network_tasks_total_of_threads[thread_number].load();
}

bool thread_id_exists(std::thread::id tid)
@@ -495,12 +486,28 @@
*/
void push_task(size_t number, task_void_callback void_task_function)
{
std::unique_ptr<uint8_t[]> data = nullptr;
size_t thread_number = assign_thread(number, thread_count);
{
std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
auto task_function = [void_task_function](std::unique_ptr<uint8_t[]> data) { void_task_function(); };
task_queue_of_threads[thread_number].push_back({ task_function, std::move(data) });
task_queue_of_threads[thread_number].push_back({ task_function, std::unique_ptr<uint8_t[]>{} });
++tasks_total_of_threads[thread_number];
}
task_available_cv[thread_number].notify_one();
}

void push_task(std::thread::id tid, task_void_callback void_task_function)
{
size_t thread_number = 0;
if (auto iter = thread_ids.find(tid); iter == thread_ids.end())
thread_number = assign_thread(std::hash<std::thread::id>{}(tid), thread_count);
else
thread_number = iter->second;

{
std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
auto task_function = [void_task_function](std::unique_ptr<uint8_t[]> data) { void_task_function(); };
task_queue_of_threads[thread_number].push_back({ task_function, std::unique_ptr<uint8_t[]>{} });
++tasks_total_of_threads[thread_number];
}
task_available_cv[thread_number].notify_one();
@@ -523,36 +530,52 @@
task_available_cv[thread_number].notify_one();
}

void push_task_local(size_t number, task_callback task_function, std::unique_ptr<uint8_t[]> data)
void push_task(std::thread::id tid, task_callback task_function, std::unique_ptr<uint8_t[]> data)
{
size_t thread_number = 0;
if (auto iter = thread_ids.find(tid); iter == thread_ids.end())
thread_number = assign_thread(std::hash<std::thread::id>{}(tid), thread_count);
else
thread_number = iter->second;

{
std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
task_queue_of_threads[thread_number].push_back({ task_function, std::move(data) });
++tasks_total_of_threads[thread_number];
}
task_available_cv[thread_number].notify_one();
}
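
The new `std::thread::id` overloads allow work to be queued onto a specific worker: ids of pool workers resolve through the `thread_ids` map to their own queue, while foreign ids fall back to a `std::hash`-based assignment. A hedged usage sketch (include path assumed):

```cpp
#include <cstddef>
#include <thread>
#include "thread_pool.hpp"  // assumed location of this header

int main()
{
	ttp::task_group_pool pool;

	// Post to a queue chosen by number, then chain a follow-up task onto
	// whichever worker thread the first task happens to run on.
	pool.push_task(std::size_t{ 0 }, [&pool]()
	{
		pool.push_task(std::this_thread::get_id(), []() { /* continuation */ });
	});

	return 0;
}
```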

void push_task_listener(size_t number, task_callback task_function, std::unique_ptr<uint8_t[]> data)
{
size_t thread_number = assign_thread_odd(number, thread_count);
{
std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
auto task_func = [task_function, this, thread_number](std::unique_ptr<uint8_t[]> data)
{
task_function(std::move(data));
local_network_tasks_total_of_threads[thread_number]--;
listener_network_tasks_total_of_threads[thread_number]--;
};
task_queue_of_threads[thread_number].push_back({ task_func, std::move(data) });
tasks_total_of_threads[thread_number]++;
local_network_tasks_total_of_threads[thread_number]++;
listener_network_tasks_total_of_threads[thread_number]++;
}
task_available_cv[thread_number].notify_one();
}

void push_task_peer(size_t number, task_callback task_function, std::unique_ptr<uint8_t[]> data)
void push_task_forwarder(size_t number, task_callback task_function, std::unique_ptr<uint8_t[]> data)
{
size_t thread_number = assign_thread_even(number, thread_count);
{
std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
auto task_func = [task_function, this, thread_number](std::unique_ptr<uint8_t[]> data)
{
task_function(std::move(data));
peer_network_tasks_total_of_threads[thread_number]--;
forwarder_network_tasks_total_of_threads[thread_number]--;
};
task_queue_of_threads[thread_number].push_back({ task_func, std::move(data) });
tasks_total_of_threads[thread_number]++;
peer_network_tasks_total_of_threads[thread_number]++;
forwarder_network_tasks_total_of_threads[thread_number]++;
}
task_available_cv[thread_number].notify_one();
}
@@ -682,7 +705,7 @@
for (concurrency_t i = 0; i < thread_count; ++i)
{
threads[i] = std::thread(&task_group_pool::worker, this, i);
thread_ids.insert(threads[i].get_id());
thread_ids[threads[i].get_id()] = i;
}
}

@@ -797,9 +820,12 @@
*/
std::atomic<bool> waiting = false;

std::unique_ptr<std::atomic<size_t>[]> local_network_tasks_total_of_threads;
std::unique_ptr<std::atomic<size_t>[]> peer_network_tasks_total_of_threads;
std::set<std::thread::id> thread_ids;
std::unique_ptr<std::atomic<size_t>[]> listener_network_tasks_total_of_threads;
std::unique_ptr<std::atomic<size_t>[]> forwarder_network_tasks_total_of_threads;
std::map<std::thread::id, size_t> thread_ids;
calculate_func assign_thread_odd;
calculate_func assign_thread_even;
calculate_func assign_thread;
};


Expand Down