
Make Rust programs support Windows XP RTM (please report Rust compatibility issues in this thread) #80

Open
2 tasks done
han1548772930 opened this issue May 14, 2024 · 24 comments · Fixed by #124, #126 or #125

Comments

@han1548772930

han1548772930 commented May 14, 2024

Background

Rust has also dropped compatibility with Windows 7 and XP, so unified support through YY-Thunks is needed. If you run into missing APIs in Rust-related projects, please reply in this thread.

If your program reports that an API cannot be found, please scan your application with YY.Depends.Analyzer, which ships with YY-Thunks. The scanner checks for every missing API and compiles them into a report, which greatly reduces the chance of anything being missed.

When one API is missing, others are usually missing as well, and discovering them one by one through the system's error pop-ups is very inefficient.

; For example, to analyze the APIs that Google Chrome is missing on XP, run the following command. YY.Depends.Analyzer is provided in the Release artifacts.
YY.Depends.Analyzer  "C:\Program Files\Google\Chrome\Application\125.0.6422.113" /IgnoreReady

Missing or incomplete functions

Ws2_32.dll

KERNEL32.DLL

  • GetTimeZoneInformationForYear
@mingkuang-Chuyu mingkuang-Chuyu changed the title from "GetTimeZoneInformationForYear is missing on Windows XP SP3" to "Using the Rust chrono library causes the program to report a missing GetTimeZoneInformationForYear on Windows XP SP3" May 14, 2024
@mingkuang-Chuyu mingkuang-Chuyu added the "Status: awaiting release" (handled; please wait patiently for the new version) label and removed the "Status: in progress" label May 14, 2024

@mingkuang-Chuyu mingkuang-Chuyu added the "Status: completed" label and removed the "Status: awaiting release" (handled; please wait patiently for the new version) label May 14, 2024
@mingkuang-Chuyu mingkuang-Chuyu changed the title from "Using the Rust chrono library causes the program to report a missing GetTimeZoneInformationForYear on Windows XP SP3" to "Make Rust programs support Windows XP RTM (please report missing Rust APIs in this thread)" May 15, 2024
@Chuyu-Team Chuyu-Team deleted a comment from han1548772930 May 15, 2024
@mingkuang-Chuyu
Collaborator

@wapznw
Please reply here. If further handling is needed, at least provide the API that caused the failure.

@wapznw

wapznw commented Jun 21, 2024

It looks like the SIO_BASE_HANDLE control code that mio passes to WSAIoctl is not supported on Windows XP.
https://github.com/tokio-rs/mio/blob/1a738f9d751eabf4c136e782a423ce4ef2661c81/src/sys/windows/selector.rs#L651
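
For context, a rough sketch of the kind of WSAIoctl call involved, written as raw FFI rather than mio's actual code (the SIO_BASE_HANDLE value is from the Windows SDK; treat the details as illustrative):

use std::ffi::c_void;
use std::ptr;

#[link(name = "ws2_32")]
extern "system" {
    // int WSAIoctl(SOCKET s, DWORD dwIoControlCode, LPVOID lpvInBuffer, DWORD cbInBuffer,
    //              LPVOID lpvOutBuffer, DWORD cbOutBuffer, LPDWORD lpcbBytesReturned,
    //              LPWSAOVERLAPPED lpOverlapped, LPWSAOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine);
    fn WSAIoctl(
        s: usize,
        control_code: u32,
        in_buf: *mut c_void,
        in_len: u32,
        out_buf: *mut c_void,
        out_len: u32,
        bytes_returned: *mut u32,
        overlapped: *mut c_void,
        completion_routine: *mut c_void,
    ) -> i32;
}

const SIO_BASE_HANDLE: u32 = 0x4800_0022; // IOC_OUT | IOC_WS2 | 34

/// Ask the layered-provider chain for the base (kernel) socket handle.
/// On Vista+ this succeeds; on XP the ioctl is rejected, which matches the
/// error 10022 (WSAEINVAL) reported later in this thread.
unsafe fn base_handle(socket: usize) -> Result<usize, i32> {
    let mut base: usize = 0;
    let mut bytes: u32 = 0;
    let rc = WSAIoctl(
        socket,
        SIO_BASE_HANDLE,
        ptr::null_mut(),
        0,
        &mut base as *mut usize as *mut c_void,
        std::mem::size_of::<usize>() as u32,
        &mut bytes,
        ptr::null_mut(),
        ptr::null_mut(),
    );
    if rc == 0 {
        Ok(base)
    } else {
        Err(rc) // SOCKET_ERROR; call WSAGetLastError() for the actual code
    }
}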

@mingkuang-Chuyu mingkuang-Chuyu changed the title from "Make Rust programs support Windows XP RTM (please report missing Rust APIs in this thread)" to "Make Rust programs support Windows XP RTM (please report Rust compatibility issues in this thread)" Jun 22, 2024
@mingkuang-Chuyu
Collaborator

It looks like the SIO_BASE_HANDLE control code that mio passes to WSAIoctl is not supported on Windows XP. https://github.com/tokio-rs/mio/blob/1a738f9d751eabf4c136e782a423ce4ef2661c81/src/sys/windows/selector.rs#L651

The requirement has been recorded. Since relatively few people use Rust, for now we are only logging the problem (there are still many other missing-API issues to handle and we are short on manpower).

If you have a solution, you are also welcome to submit a PR to YY-Thunks.

@huiminghao

It looks like the SIO_BASE_HANDLE control code that mio passes to WSAIoctl is not supported on Windows XP. https://github.com/tokio-rs/mio/blob/1a738f9d751eabf4c136e782a423ce4ef2661c81/src/sys/windows/selector.rs#L651

mio does not support XP. Nobody is doing porting work on this at the moment, so none of the async net I/O libraries can be used. The std ones should still work.

@mingkuang-Chuyu
Collaborator

mio does not support XP. Nobody is doing porting work on this at the moment, so none of the async net I/O libraries can be used. The std ones should still work.

We already know this, so what needs to be done is to add those unsupported features back in YY-Thunks, and then mio can work normally on XP.

@zhuxiujia

zhuxiujia commented Jul 13, 2024

mio does not support XP. Nobody is doing porting work on this at the moment, so none of the async net I/O libraries can be used. The std ones should still work.

We already know this, so what needs to be done is to add those unsupported features back in YY-Thunks, and then mio can work normally on XP.

Cause:
I looked into the problem @wapznw mentioned. Running an exe built with tokio and binding an HTTP port fails with: called Result::unwrap() on an Err value: Os { code: 10022, kind: InvalidInput, message: "An invalid argument was supplied." }. The cause is that the WSAIoctl operation mio performs is only supported on Vista and later.

How to fix:
I forcibly modified the mio source and commented out all of the WSAIoctl calls; after that, it runs normally on Windows Server 2003 (which should share the same kernel generation as XP).
YY-Thunks could detect the WSAIoctl call on XP and have it return Ok(socket).
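
A hedged sketch of that fallback on the calling side (illustrative only, not mio's or YY-Thunks' actual code), reusing the hypothetical base_handle helper from the earlier sketch:

/// Illustrative fallback: if SIO_BASE_HANDLE is rejected (WSAEINVAL on XP/2003),
/// keep using the socket we were given instead of failing the whole poller setup.
unsafe fn base_handle_or_self(socket: usize) -> usize {
    match base_handle(socket) {
        Ok(base) => base, // Vista+: unwrap any layered service providers
        Err(_) => socket, // XP/2003: no unwrapping possible, use the socket as-is
    }
}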

Side effect:
The first few requests are fine and respond normally, but after a while the following error appears:

unexpected error when polling the I/O driver: Os { code: 995, kind: TimedOut,
message: "The I/O operation has been aborted because of either a thread exit or an application request." }

@muyu1944

@zhuxiujia Simply commenting it out definitely won't work. That WSAIoctl call is essentially obtaining a socket handle that can be used for asynchronous operations; without that step you are effectively using a synchronous socket in an asynchronous environment.
You could try changing the SIO_BASE_HANDLE control code that mio sets to FIONBIO instead (see the sketch below).
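
For reference, a minimal sketch of that FIONBIO call (raw FFI declaration; the constant value is per the Windows SDK):

#[link(name = "ws2_32")]
extern "system" {
    // int ioctlsocket(SOCKET s, long cmd, u_long *argp);
    fn ioctlsocket(s: usize, cmd: i32, argp: *mut u32) -> i32;
}

const FIONBIO: i32 = 0x8004_667Eu32 as i32; // _IOW('f', 126, u_long)

/// Classic pre-Vista way to put a socket into non-blocking mode.
unsafe fn set_nonblocking(socket: usize) -> bool {
    let mut enabled: u32 = 1; // non-zero enables non-blocking mode
    ioctlsocket(socket, FIONBIO, &mut enabled) == 0
}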

@zhuxiujia

@zhuxiujia Simply commenting it out definitely won't work. That WSAIoctl call is essentially obtaining a socket handle that can be used for asynchronous operations; without that step you are effectively using a synchronous socket in an asynchronous environment. You could try changing the SIO_BASE_HANDLE control code that mio sets to FIONBIO instead.

Calling FIONBIO via ioctlsocket still doesn't work; tokio still reports "The I/O operation has been aborted because of either a thread exit or an application request."

@zhuxiujia

It looks like the SIO_BASE_HANDLE control code that mio passes to WSAIoctl is not supported on Windows XP. https://github.com/tokio-rs/mio/blob/1a738f9d751eabf4c136e782a423ce4ef2661c81/src/sys/windows/selector.rs#L651

It probably has nothing to do with SIO_BASE_HANDLE: on Win11, commenting out the SIO_BASE_HANDLE code and using base_socket directly does not affect normal operation either.

@zhuxiujia

zhuxiujia commented Aug 20, 2024

My test steps were:
1. Modify the source of the mio library that tokio depends on so that the SIO_BASE_HANDLE call simply returns the base socket it was given. (This step is fine; Win11 works perfectly.)
2. Build with YY-Thunks and deploy to XP/2003: the service starts, but after 2-3 HTTP requests the next request hangs with no response.
3. My first suspicion was the IOCP GetQueuedCompletionStatusEx call in the mio source. I changed the Rust source to use GetQueuedCompletionStatus, rebuilt for XP/2003, and got the same result: after 2-3 HTTP requests the next one hangs. Win11 remains fine, so this cause can basically be ruled out.
4. Searching the mio source a second time for system functions ending in Ex, I suspected the NtCancelIoFileEx call (YY-Thunks bluntly links it straight to NtCancelIoFile), so I tried to reproduce the XP hang on Win11 in reverse: I changed NtCancelIoFileEx to NtCancelIoFile in mio, and on Win11 the same thing happened: after 2-3 requests the next HTTP request hangs with no response.

Conclusions:
1. The SIO_BASE_HANDLE problem can be solved by simply returning the handle of the base socket passed in as the argument.
2. No replacement for NtCancelIoFileEx has been found yet, and the NtCancelIoFile that YY-Thunks links to behaves differently: the former can cancel a single outstanding I/O request, while the latter cancels all I/O requests issued by the calling thread on that handle (my guess is that linking to NtCancelIoFile and cancelling all I/O is very likely the culprit behind the unresponsive requests). See the signature sketch below.
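
To make the difference in point 2 concrete, the two ntdll entry points have roughly the following shapes (declarations reconstructed from public documentation; in practice they are usually resolved at run time via GetProcAddress, and the IO_STATUS_BLOCK layout is simplified here):

use std::ffi::c_void;

#[repr(C)]
pub struct IoStatusBlock {
    // The real IO_STATUS_BLOCK begins with a union of NTSTATUS and PVOID;
    // a pointer-sized field keeps the size right on both architectures.
    pub status: *mut c_void,
    pub information: usize,
}

#[link(name = "ntdll")]
extern "system" {
    // Vista and later: can target ONE outstanding request (identified by its
    // IO_STATUS_BLOCK / OVERLAPPED) and may be called from any thread.
    // Passing NULL for io_request_to_cancel cancels everything on the handle.
    fn NtCancelIoFileEx(
        file: *mut c_void,
        io_request_to_cancel: *mut IoStatusBlock,
        io_status: *mut IoStatusBlock,
    ) -> i32;

    // XP: cancels ALL outstanding requests that the CALLING thread issued on
    // the handle; there is no way to single out one request or another thread's.
    fn NtCancelIoFile(file: *mut c_void, io_status: *mut IoStatusBlock) -> i32;
}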

@stevefan1999-personal
Contributor

stevefan1999-personal commented Oct 14, 2024

Related: tokio-rs/mio#735. I'm so close to getting a pure Rust TLS1.3 implementation running on Windows XP with async support...

@stevefan1999-personal
Contributor

stevefan1999-personal commented Oct 15, 2024

I made it! I can finally run reqwest on a fresh Windows XP SP3 system out of the box without any kernel patches!

(screenshot)

// (Imports omitted in the original post; `Error` is presumably a catch-all
// type such as anyhow::Error.)
#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<(), Error> {
    let _ = rustls_rustcrypto::provider().install_default();
    let _ = tokio::spawn(async move {
        let exec = async {
            Ok::<_, Error>(reqwest::get("https://www.rust-lang.org").await?.text().await?)
        }.await;
        match exec {
            Ok(data) => println!("{data}"),
            Err(e) => println!("{e:?}")
        }
    })
    .await;
    press_btn_continue::wait("Press any key to continue...")?;
    Ok(())
}

Notice how I have to use #[tokio::main(flavor = "current_thread")] here.

This is because NtCancelIoFileEx does this

The NtCancelIoFileEx function allows you to cancel requests in threads other than the calling thread. The NtCancelIoFile function only cancels requests in the same thread that called NtCancelIoFile.

(Reference: https://github.com/tongzx/nt5src/blob/daad8a087a4e75422ec96b7911f1df4669989611/Source/XPSP1/NT/base/ntos/io/iomgr/misc.c#L50)

Only those pending operations that were issued by the current thread using
the specified handle are canceled.  Any operations issued for the file by
any other thread or any other process continues normally.

This means if we can pin the cancellation request to the same thread the request was created on, then we can use NtCancelIoFile as an alternative for NtCancelIoFileEx (since there are no other threads' requests to cancel).

Keep in mind using current_thread means your runtime is now single-threaded, but you can also use LocalSet for this.
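
For completeness, a minimal sketch of the LocalSet variant just mentioned (standard tokio APIs; everything else omitted):

use tokio::task::LocalSet;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    // A LocalSet keeps every task spawned into it on the current thread, so any
    // NtCancelIoFile issued for those operations runs on the same thread that
    // started them.
    let local = LocalSet::new();
    local
        .run_until(async {
            tokio::task::spawn_local(async {
                // !Send futures are fine here; nothing migrates across threads.
                println!("running pinned to the current thread");
            })
            .await
            .unwrap();
        })
        .await;
}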

@zhuxiujia

@stevefan1999-personal This works because the entire tokio runtime has become single-threaded, sacrificing performance.

@stevefan1999-personal
Contributor

stevefan1999-personal commented Oct 15, 2024

@stevefan1999-personal This works because the entire tokio runtime has become single-threaded, sacrificing performance.

Yes, and that's how you do it right now:

// (Imports omitted in the original post; `Builder` is tokio::runtime::Builder
// and `Error` is presumably a catch-all type such as anyhow::Error.)
#[tokio::main]
async fn main() -> Result<(), Error> {
    let _ = rustls_rustcrypto::provider().install_default();

    tokio::task::spawn_blocking(|| {
        Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap()
            .block_on(async {
                loop {
                    let _ = tokio::spawn(async move {
                        let exec = async {
                            Ok::<_, Error>(
                                reqwest::get("https://www.rust-lang.org")
                                    .await?
                                    .text()
                                    .await?,
                            )
                        }
                        .await;
                        match exec {
                            Ok(data) => println!("{data}"),
                            Err(e) => println!("{e:?}"),
                        }
                    })
                    .await;
                }
            })
    });

    press_btn_continue::wait("Press any key to continue...")?;
    Ok(())
}

We use spawn_blocking to pin a specific thread for I/O operations, and we can still have other worker threads doing their own work. The problem is that mio uses file cancellation to notify other threads that a completed job should be cleaned up (marked as deleted), so if any mio cancel operation is used outside of the single-threaded context, it also breaks all the event polling going on in the other worker threads, which causes a panic.

Also, tokio::spawn will transfer the async closure to different threads -- that's where the problem happens. YY-Thunks' implementation of NtCancelIoFileEx is very broken, but we don't have a choice here -- that thing is a kernel-level syscall and cannot be replicated reliably in userspace, meaning something like KernelEx would be needed to make it work properly -- not worth the trouble at the moment.

@stevefan1999-personal
Contributor

Not sure if we can use APC to work around the limitations of NtCancelIoFile

https://learn.microsoft.com/en-us/windows/win32/sync/asynchronous-procedure-calls
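
Rough sketch of that idea (purely illustrative, not an implementation): queue an APC onto the thread that issued the I/O, so the cancellation runs on that thread, where NtCancelIoFile/CancelIo can actually see the request. The catch is that the target thread has to enter an alertable wait (SleepEx, WaitForSingleObjectEx, ...) for the APC to fire.

use std::ffi::c_void;

#[link(name = "kernel32")]
extern "system" {
    // DWORD QueueUserAPC(PAPCFUNC pfnAPC, HANDLE hThread, ULONG_PTR dwData);
    fn QueueUserAPC(
        pfn_apc: unsafe extern "system" fn(usize),
        thread: *mut c_void,
        data: usize,
    ) -> u32;
    // BOOL CancelIo(HANDLE hFile); cancels the calling thread's I/O on hFile.
    fn CancelIo(file: *mut c_void) -> i32;
}

/// APC routine: runs on the thread that issued the I/O, so CancelIo applies
/// to that thread's outstanding requests on the handle.
unsafe extern "system" fn cancel_apc(handle_as_usize: usize) {
    CancelIo(handle_as_usize as *mut c_void);
}

/// Ask `issuing_thread` (a thread HANDLE) to cancel its own I/O on `file`.
/// Returns false if the APC could not be queued.
unsafe fn request_cancel_on_owner(issuing_thread: *mut c_void, file: *mut c_void) -> bool {
    QueueUserAPC(cancel_apc, issuing_thread, file as usize) != 0
}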

@zhuxiujia

zhuxiujia commented Oct 15, 2024

@stevefan1999-personal If you are using hyper or axum, a connection is accepted on one thread and then handed off with tokio::spawn. We know that tokio::spawn submits the task to the runtime, where any worker thread can steal and execute it; in other words, the task can migrate from one thread to another.

so....

Why not design an I/O management scheduler in mio so that the I/O issued by a given thread is cancelled by that thread itself?

The key to implementing a work-stealing mechanism similar to smol's in mio is to design a system that can track and manage I/O requests across threads, since Windows XP has no precise I/O cancellation API like NtCancelIoFileEx. With a global I/O request table and task-migration tracking logic, combined with I/O completion ports, you can get cross-thread tracking of I/O operations and a "simulated cancellation", even though the lower layer cannot precisely cancel a specific I/O request the way modern Windows can.
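
A very rough sketch of the bookkeeping such a scheme implies (entirely hypothetical names and structure, just to illustrate the global request table idea): record which thread issued each overlapped operation so a cancellation can be routed back to it, for instance with the APC approach above.

use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

/// Hypothetical registry: OVERLAPPED pointer (as usize) -> issuing thread id.
static IO_OWNERS: OnceLock<Mutex<HashMap<usize, u32>>> = OnceLock::new();

fn owners() -> &'static Mutex<HashMap<usize, u32>> {
    IO_OWNERS.get_or_init(|| Mutex::new(HashMap::new()))
}

/// Called right before an overlapped operation is submitted.
fn register_io(overlapped: usize, thread_id: u32) {
    owners().lock().unwrap().insert(overlapped, thread_id);
}

/// Called when a cancellation is requested from some other thread: look up
/// the owner so the cancel can be forwarded to it (e.g. via QueueUserAPC).
fn owner_of(overlapped: usize) -> Option<u32> {
    owners().lock().unwrap().get(&overlapped).copied()
}

/// Called when the completion packet is dequeued.
fn unregister_io(overlapped: usize) {
    owners().lock().unwrap().remove(&overlapped);
}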

@stevefan1999-personal
Contributor

Oh, it looks like NtCancelIoFileEx accepts an overlapped I/O request to filter on, and the second parameter of NtCancelIoFile is actually not that.

@zhuxiujia

Oh, it looks like NtCancelIoFileEx accepts an overlapped I/O request to filter on, and the second parameter of NtCancelIoFile is actually not that.

Yes, the I/O should be cancelled according to the LPOVERLAPPED.

@stevefan1999-personal
Contributor

Oh, it looks like NtCancelIoFileEx accepts an overlapped I/O request to filter on, and the second parameter of NtCancelIoFile is actually not that.

Yes, the I/O should be cancelled according to the LPOVERLAPPED.

Yep... after experimenting I noticed that if I do nothing at all in NtCancelIoFileEx and just return STATUS_NOT_FOUND, that actually fared much better than calling NtCancelIoFile. The problem comes exactly from LPOVERLAPPED not being passed through: NtCancelIoFile then cancels every outstanding I/O request on the current calling thread. Using NtCancelIoFile alone is simply cooked.

@stevefan1999-personal
Contributor

Ah, god damn it, they have a major redesign of Overlapped I/O in Vista:

Take this throwaway line in the Cancelling Queued Device I/O Requests section of the Asynchronous Device I/O chapter of the latest book: “When a thread dies, the system automatically cancels all I/O requests issued by the thread, except for requests made to handles that have been associated with an I/O completion port.” This is then clarified later in the chapter in a note which points out that prior to Windows Vista if you associated a device with an I/O completion port and then issued overlapped I/O requests on it then you had to make sure that the thread that issued the requests remained alive until the I/O requests had completed. Not anymore! Vista now allows threads to issue overlapped I/O requests and exit and it will still process the requests and queue them to the completion port. This makes perfect sense and will simplify writing general purpose I/O completion port code.

Reference: https://lenholgate.com/blog/2008/02/major-vista-overlapped-io-change.html

So that's the whole reason NtCancelIoFileEx was introduced in Vista. I guess it can't be helped: there is no real solution for NtCancelIoFileEx in mio right now unless we do kernel patches and deliberately add that syscall, so you either fall back to single-threaded I/O in Tokio, or to threaded I/O without IOCP. It's simply not worth the trouble, I would argue.

@zhuxiujia

zhuxiujia commented Oct 16, 2024

Ah, god damn it, they have a major redesign of Overlapped I/O in Vista:

Take this throwaway line in the Cancelling Queued Device I/O Requests section of the Asynchronous Device I/O chapter of the latest book: “When a thread dies, the system automatically cancels all I/O requests issued by the thread, except for requests made to handles that have been associated with an I/O completion port.” This is then clarified later in the chapter in a note which points out that prior to Windows Vista if you associated a device with an I/O completion port and then issued overlapped I/O requests on it then you had to make sure that the thread that issued the requests remained alive until the I/O requests had completed. Not anymore! Vista now allows threads to issue overlapped I/O requests and exit and it will still process the requests and queue them to the completion port. This makes perfect sense and will simplify writing general purpose I/O completion port code.

Reference: https://lenholgate.com/blog/2008/02/major-vista-overlapped-io-change.html

So that's the whole reason NtCancelIoFileEx was introduced in Vista. I guess it can't be helped: there is no real solution for NtCancelIoFileEx in mio right now unless we do kernel patches and deliberately add that syscall, so you either fall back to single-threaded I/O in Tokio, or to threaded I/O without IOCP. It's simply not worth the trouble, I would argue.

No. If you just return STATUS_NOT_FOUND and do nothing, then when running axum the first accept request is accepted, but the second accept request gets no response.

You can try opening http://127.0.0.1:8000/ in two browsers: as long as one browser keeps its connection open, the other browser is blocked.
With this code:

// (Imports omitted in the original post; with current axum this is roughly
// `use axum::{response::Html, routing::get, Router};`.)
#[tokio::main]
async fn main() {
    // build our application with a route
    let app = Router::new().route("/", get(handler));

    // run it
    let listener = tokio::net::TcpListener::bind("0.0.0.0:8000")
        .await
        .unwrap();
    println!("listening on http://127.0.0.1:8000");
    axum::serve(listener, app).await.unwrap();
}

async fn handler() -> Html<&'static str> {
    Html("<h1>Hello, World!</h1>")
}

@stevefan1999-personal
Contributor

stevefan1999-personal commented Oct 22, 2024

Tokio, and by extension mio, should work after the patches above in single-threaded mode, sadly. For Tokio to truly work in a multithreaded, work-stealing environment you need at least Windows Vista, because overlapped I/O changed fundamentally in Vista: an I/O request is no longer bound to the lifetime of the thread that issued it and can keep running after that thread has exited. In other words, the overlapped operation is independent of its thread, and the creating thread no longer has to stay alive to clean it up. If you want, you can spawn a new thread for the async I/O operation, provided you don't mind the Send + Sync requirement. Fun fact: Windows 7 is largely a Vista reskin, and the kernel itself did not change much after Vista, except Hyper-V, Windows Containers, and some confidential computing stuff, I think.

@mingkuang-Chuyu
Collaborator

@stevefan1999-personal @zhuxiujia
I have published a new release: https://github.com/Chuyu-Team/YY-Thunks/releases/tag/v1.1.5-Beta1

The new release makes a small adjustment to the SetFileCompletionNotificationModes function, which may affect tokio. If possible, please help run some tests so that any problems are caught early.

Thanks
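
For anyone running those tests: the API in question is the Win32 call below, which IOCP-based code commonly uses so that operations completing synchronously can skip the completion port (presumably why it can affect tokio). A minimal sketch in case you want to exercise the thunk in isolation (raw FFI declaration; flag values per the Windows SDK):

use std::ffi::c_void;

#[link(name = "kernel32")]
extern "system" {
    // BOOL SetFileCompletionNotificationModes(HANDLE FileHandle, UCHAR Flags);
    fn SetFileCompletionNotificationModes(file: *mut c_void, flags: u8) -> i32;
}

const FILE_SKIP_COMPLETION_PORT_ON_SUCCESS: u8 = 0x1;
const FILE_SKIP_SET_EVENT_ON_HANDLE: u8 = 0x2;

/// Minimal repro helper: apply both skip flags to a handle and report whether
/// the call succeeded, so behaviour can be compared between XP (with the
/// YY-Thunks shim) and Vista+.
unsafe fn apply_skip_modes(handle: *mut c_void) -> bool {
    SetFileCompletionNotificationModes(
        handle,
        FILE_SKIP_COMPLETION_PORT_ON_SUCCESS | FILE_SKIP_SET_EVENT_ON_HANDLE,
    ) != 0
}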
