Snapshot logic has flaws in scenarios where the disk is full #461

Conversation

@Axlgrep commented Jul 2, 2024

Problem Background

Our production service is built on the braft framework. While it was running, the machine hosting the service ran out of disk space, and the logs from that period contain several snapshot-related errors. We then expanded the disk and restarted the service, but during recovery a check inside log_manager.cpp kept failing and the service could not start:

F20240701 15:34:54.037248 201290 log_manager.cpp:272] Check failed: first_index_kept >= _first_log_index (9355 vs. 10709)
I20240701 15:34:54.053728 201290 server_debug.cpp:219] Catch Signal 6...

Cause Analysis

Inspecting the local directory layout, we found that the most recent successfully generated snapshot directory is snapshot_00000000000000009354, created at 20240701 11:11:20.936393. After the restart, replay should therefore begin at log index 9355 once this snapshot is loaded, but the first valid entry in the log is 10709, which means log entries are missing (the missing range is [9355, 10709)). The check is right to fail, and the node must not be allowed to start.
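
As a minimal, self-contained illustration (not braft source) of the invariant behind that CHECK: the on-disk log must still begin at or before the first index to be replayed from the loaded snapshot, otherwise there is a gap between snapshot and log.

#include <cassert>
#include <cstdint>

// Hypothetical helper mirroring the failed check: replay resumes at
// last_included_index + 1, so the first on-disk log entry must not be
// later than that index.
void check_log_joins_snapshot(int64_t last_included_index,
                              int64_t first_log_index) {
    const int64_t first_index_kept = last_included_index + 1;
    // For this issue: 9354 + 1 = 9355, but first_log_index is 10709,
    // so entries in [9355, 10709) are missing and the assertion fires.
    assert(first_index_kept >= first_log_index);
}

int main() {
    check_log_joins_snapshot(9354, 10709);  // aborts, just as the node did on startup
    return 0;
}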

We then examined the logs written after the last successful snapshot and found that snapshots were triggered at two points in time, 20240701 13:11:54.375097 and 20240701 13:35:03.102234. Because the disk was already full, both attempts produced error logs (the two attempts log roughly the same messages, so the excerpt below is taken from the 20240701 13:11:54.375097 attempt):

W20240701 13:11:54.375097   102 util.cpp:322] write falied, err: No space left on device fd: 243 offset: 0 size: 4
W20240701 13:11:54.375156   102 protobuf_file.cpp:61] write len failed, path: /data/shards/***/snapshot/temp/__raft_snapshot_meta.tmp
E20240701 13:11:54.375200   102 snapshot.cpp:74] Fail to save meta to /data/shards/***/snapshot/temp/__raft_snapshot_meta: No space left on device [28]
E20240701 13:11:54.375685   102 snapshot.cpp:276] Fail to sync, path: /data/shards/***/snapshot/temp
I20240701 13:11:54.375737   102 snapshot.cpp:519] Deleting /data/shards/***/snapshot/temp
I20240701 13:11:56.083323   102 snapshot_executor.cpp:234] node ***:127.0.0.1:8100:0 snapshot_save_done, last_included_index=10708 last_included_term=2
I20240701 13:11:56.083458   102 shard_node.h:154] Shard:*** snapshot done ok.
I20240701 13:11:56.088521    96 log.cpp:1145] log save_meta /data/shards/***/log/log_meta first_log_index: 10090 time: 5007
I20240701 13:11:56.088872    96 log.cpp:603] Unlinked segment `/data/shards/tencent-vector-test2_1/log/log_00000000000000009352_00000000000000009360'
...
I20240701 13:11:56.103888    96 log.cpp:603] Unlinked segment `/data/shards/tencent-vector-test2_1/log/log_00000000000000010076_00000000000000010083'

These logs show that persisting __raft_snapshot_meta failed during the snapshot, so the newly generated temp snapshot directory was deleted (at this point the most recent successful snapshot was still snapshot_00000000000000009354). The snapshot flow nevertheless kept going: it then cleaned up the log entries in the range [9352, 10083] and advanced the index recorded in the log_meta file to 10090. From that moment on the snapshot data no longer matched the log (in fact, because of the log cleanup, data had already been lost).

Function Call Chain

After the application-level snapshot logic finishes, the framework calls back into on_snapshot_save_done():

int SnapshotExecutor::on_snapshot_save_done(
    ...
    int ret = st.error_code();
    ...
    if (_snapshot_storage->close(writer) != 0) {
        ret = EIO;
        LOG(WARNING) << "node " << _node->node_id() << " fail to close writer";
    }
    ...
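    // close() returning 0 is what allows the branch below to run:
    // set_snapshot() then lets LogManager discard log entries covered by the snapshot.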
    if (ret == 0) {
        _last_snapshot_index = meta.last_included_index();
        _last_snapshot_term = meta.last_included_term();
        lck.unlock();
        ss << "snapshot_save_done, last_included_index=" << meta.last_included_index()
           << " last_included_term=" << meta.last_included_term();
        LOG(INFO) << ss.str();
        _log_manager->set_snapshot(&meta);
        lck.lock();
    }
    ...
    return ret;
}

Part of this logic is _snapshot_storage calling close(), which persists the snapshot meta and renames the snapshot directory:

int LocalSnapshotStorage::close(SnapshotWriter* writer_base,
                                bool keep_data_on_error) {
    LocalSnapshotWriter* writer = dynamic_cast<LocalSnapshotWriter*>(writer_base);
    int ret = writer->error_code();
    do {
        ...
        ret = writer->sync();
        if (ret != 0) {
            break;
        }
        ...
    } while (0);

    if (ret != 0 && !keep_data_on_error) {
        destroy_snapshot(writer->get_path());
    }
    delete writer;
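    // NOTE: any error other than EIO (e.g. the -1 returned by writer->sync())
    // is mapped to 0 here, which is the flaw analysed below.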
    return ret != EIO ? 0 : -1;
}

Internally, writer->sync() is what persists the snapshot meta. When the disk is full and that write fails, sync() returns -1, and execution falls through to the final return ret != EIO ? 0 : -1;. This expression is seriously flawed: it only treats the call as failed when ret equals EIO (5). With ret == -1, close() actually returns 0, silently swallowing the internal error, so the caller believes close() succeeded and goes on to clean up the raft log.
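
To make this concrete, here is a small self-contained snippet (illustration only, not braft code) showing what the final expression returns for the three relevant values of ret:

#include <cerrno>
#include <cstdio>

// Mirrors the last statement of LocalSnapshotStorage::close().
static int close_return_value(int ret) {
    return ret != EIO ? 0 : -1;
}

int main() {
    std::printf("ret == 0   -> %d (success)\n", close_return_value(0));
    std::printf("ret == EIO -> %d (error reported)\n", close_return_value(EIO));
    // writer->sync() reports the full disk with -1, not EIO, so the error is dropped:
    std::printf("ret == -1  -> %d (error silently swallowed)\n", close_return_value(-1));
    return 0;
}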

The end result is that the snapshot directory was never updated (it remained the last successful one, snapshot_00000000000000009354 from 20240701 11:11:20.936393), yet the raft log was cleaned up and the index recorded in log_meta kept moving forward, so the snapshot data and the raft log no longer join up.

Fix

Once the problem is understood, the fix is fairly simple: when writer->sync() fails, set ret to EIO so that the error is propagated to the caller, instead of silencing it and letting the caller go on to clean up the braft log.
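
A sketch of what that change could look like inside the do/while block of LocalSnapshotStorage::close() quoted above (assumed shape only, not the actual commit):

        ret = writer->sync();
        if (ret != 0) {
            // Map the sync failure to EIO so the final
            // `return ret != EIO ? 0 : -1;` reports an error and
            // on_snapshot_save_done() never calls _log_manager->set_snapshot().
            ret = EIO;
            break;
        }

With the error propagated, a full disk during snapshotting still discards the temp snapshot directory, but the existing snapshot and the raft log stay consistent, so the node can recover once disk space is available again.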
