This program is totally sound and compiles today. The fact that days does
-not strictly outlive inspector doesn't matter. As long as the inspector
-is alive, so is days.
-
However if we add a destructor, the program will no longer compile!
+
+
This program is totally sound and compiles today. The fact that days does
+not strictly outlive inspector doesn't matter. As long as the
+inspector is alive, so is days.
+
+
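That claim can be checked with a minimal sketch of the destructor-free version (the `inspect` helper is hypothetical, added here only so the result is easy to exercise):

```rust
struct Inspector<'a>(&'a u8);

// Hypothetical helper: builds the pair and reads through the inspector.
fn inspect() -> u8 {
    let (inspector, days);
    days = Box::new(1);
    inspector = Inspector(&days);
    // With no destructor, the compiler accepts this even though `days`
    // does not strictly outlive `inspector`.
    *inspector.0
}

fn main() {
    println!("{}", inspect());
}
```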
However, if we add a destructor, the program will no longer compile!
struct Inspector<'a>(&'a u8);
impl<'a> Drop for Inspector<'a> {
fn drop(&mut self) {
println!("I was only {} days from retirement!", self.0);
}
}
@@ -206,54 +249,88 @@
let (inspector, days);
days = Box::new(1);
inspector = Inspector(&days);
- // Let's say `days` happens to get dropped first.
- // Then when Inspector is dropped, it will try to read free'd memory!
+ // Let's say `days` happens to get dropped first.
+ // Then when Inspector is dropped, it will try to read freed memory!
}
<anon>:12:28: 12:32 error: `days` does not live long enough
<anon>:12 inspector = Inspector(&days);
^~~~
<anon>:9:11: 15:2 note: reference must be valid for the block at 9:10...
<anon>:9 fn main() {
<anon>:10 let (inspector, days);
<anon>:11 days = Box::new(1);
<anon>:12 inspector = Inspector(&days);
-<anon>:13 // Let's say `days` happens to get dropped first.
-<anon>:14 // Then when Inspector is dropped, it will try to read free'd memory!
+<anon>:13 // Let's say `days` happens to get dropped first.
+<anon>:14 // Then when Inspector is dropped, it will try to read freed memory!
...
<anon>:10:27: 15:2 note: ...but borrowed value is only valid for the block suffix following statement 0 at 10:26
<anon>:10 let (inspector, days);
<anon>:11 days = Box::new(1);
<anon>:12 inspector = Inspector(&days);
-<anon>:13 // Let's say `days` happens to get dropped first.
-<anon>:14 // Then when Inspector is dropped, it will try to read free'd memory!
+<anon>:13 // Let's say `days` happens to get dropped first.
+<anon>:14 // Then when Inspector is dropped, it will try to read freed memory!
<anon>:15 }
-
Implementing Drop lets the Inspector execute some arbitrary code during its
+
+
Implementing the Drop trait lets the Inspector execute arbitrary code when it is destroyed.
+This means the Inspector can potentially observe that types that are supposed
+to live as long as it does actually get destroyed first.
let (inspector, days);
days = Box::new(1);
inspector = Inspector(&days, "gadget");
- // Let's say `days` happens to get dropped first.
- // Even when Inspector is dropped, its destructor will not access the
- // borrowed `days`.
+ // Let's say `days` happens to get dropped first.
+ // Even when Inspector is dropped, its destructor will not access the
+ // borrowed `days`.
}
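Destructors are precisely what make destruction order observable. A small logging sketch (the `Noisy` type and `drop_order` helper are hypothetical, not from the book) shows that locals drop in reverse declaration order, so a value declared before its observer is still alive when the observer's destructor runs:

```rust
use std::cell::RefCell;

// Hypothetical logging type: records its tag when dropped.
struct Noisy<'a>(&'a RefCell<Vec<&'static str>>, &'static str);

impl<'a> Drop for Noisy<'a> {
    fn drop(&mut self) {
        self.0.borrow_mut().push(self.1);
    }
}

fn drop_order() -> Vec<&'static str> {
    // `log` is declared first, so it strictly outlives both `Noisy` values,
    // and the drop checker accepts their destructors.
    let log = RefCell::new(Vec::new());
    {
        let _a = Noisy(&log, "a"); // declared first, dropped last
        let _b = Noisy(&log, "b"); // declared second, dropped first
    }
    log.into_inner()
}

fn main() {
    println!("{:?}", drop_order());
}
```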
-
Likewise, this variant will also never access borrowed data:
+
+
Likewise, this variant also never accesses borrowed data:
use std::fmt;
struct Inspector<T: fmt::Display>(T, &'static str);
impl<T: fmt::Display> Drop for Inspector<T> {
fn drop(&mut self) {
println!("Inspector(_, {}) knows when *not* to inspect.", self.1);
}
}
@@ -282,52 +363,104 @@
let (inspector, days): (Inspector<&u8>, Box<u8>);
days = Box::new(1);
inspector = Inspector(&days, "gadget");
- // Let's say `days` happens to get dropped first.
- // Even when Inspector is dropped, its destructor will not access the
- // borrowed `days`.
+ // Let's say `days` happens to get dropped first.
+ // Even when Inspector is dropped, its destructor will not access the
+ // borrowed `days`.
}
-
However, both of the above variants are rejected by the borrow
-checker during the analysis of fn main, saying that days does not
-live long enough.
-
The reason is that the borrow checking analysis of main does not
+
+
+However, both of the above variants are rejected by the borrow checker
+during the analysis of fn main, saying that days does not live long enough.
+
+
+The reason is that the borrow checking analysis of main does not know about
+the internals of each Inspector's Drop implementation in the two examples
+above. As far as the borrow checker can tell while analyzing main, the body
+of inspector's destructor might access that borrowed data.
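For the original single-field Inspector, one way to satisfy the checker (a sketch of one possible fix, not the only one) is to declare `days` before `inspector`, so that `days` strictly outlives it and the destructor is guaranteed to run while `days` is still alive; the `retire` wrapper is hypothetical:

```rust
struct Inspector<'a>(&'a u8);

impl<'a> Drop for Inspector<'a> {
    fn drop(&mut self) {
        // Safe: `days` is guaranteed to still be alive here.
        println!("I was only {} days from retirement!", self.0);
    }
}

// Hypothetical wrapper so the accepted ordering is easy to exercise.
fn retire() -> u8 {
    // `days` is declared first, so it strictly outlives `inspector`,
    // and the borrow checker accepts the destructor: `inspector` drops first.
    let days = Box::new(1);
    let inspector = Inspector(&days);
    *inspector.0
}

fn main() {
    println!("{}", retire());
}
```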
This attribute can be applied to any number of lifetime and type parameters.
+In the following example, we assert that we access no data behind a reference
+with lifetime 'b, and that the only uses of T will be moves or drops, but we
+omit the attribute from 'a and U, because we do access data with that
+lifetime and type.
struct Inspector<T>(T, &'static str, Box<for <'r> fn(&'r T) -> String>);
impl<T> Drop for Inspector<T> {
fn drop(&mut self) {
- // The `self.2` call could access a borrow e.g. if `T` is `&'a _`.
+ // The `self.2` call could access a borrow e.g. if `T` is `&'a _`.
+
println!("Inspector({}, {}) unwittingly inspects expired data.",
(self.2)(&self.0), self.1);
}
}
-
Here is an example of a trait method call:
+
+
Here is an example of a trait method call:
use std::fmt;
struct Inspector<T: fmt::Display>(T, &'static str);
impl<T: fmt::Display> Drop for Inspector<T> {
fn drop(&mut self) {
- // There is a hidden call to `<T as Display>::fmt` below, which
- // could access a borrow e.g. if `T` is `&'a _`
+ // There is a hidden call to `<T as Display>::fmt` below, which
+ // could access a borrow e.g. if `T` is `&'a _`.
+
println!("Inspector({}, {}) unwittingly inspects expired data.",
self.0, self.1);
}
}
-
And of course, all of these accesses could be further hidden within
+
+
When working with unsafe code, we can often end up in a situation where
types or lifetimes are logically associated with a struct, but not actually
diff --git a/docs/searchindex.js b/docs/searchindex.js
index 818b3e5..28e7e25 100644
--- a/docs/searchindex.js
+++ b/docs/searchindex.js
@@ -1 +1 @@
-Object.assign(window.search, {"doc_urls":["index.html#rust-裏本","index.html#note-この文書はドラフトです重大な間違いを含んでいるかもしれません","meet-safe-and-unsafe.html#安全と危険のご紹介","safe-unsafe-meaning.html#安全と危険の相互作用","working-with-unsafe.html#unsafe-と連携する","data.html#rust-のデータ表現","repr-rust.html#reprrust","exotic-sizes.html#奇妙なサイズの型","exotic-sizes.html#動的サイズの型dst-dynamically-sized-type","exotic-sizes.html#サイズが-0-の型zst-zero-sized-type","exotic-sizes.html#空の型","other-reprs.html#代替メモリレイアウト","other-reprs.html#reprc","other-reprs.html#repru8-repru16-repru32-repru64","other-reprs.html#reprpacked","ownership.html#所有権とライフタイム","references.html#参照","references.html#パス","references.html#生存性","references.html#別名付け","lifetimes.html#ライフタイム","lifetimes.html#例参照先より長く生きる参照","lifetimes.html#例可変参照の別名付け","lifetime-mismatch.html#ライフタイムシステムの限界","lifetime-elision.html#ライフタイムの省略","unbounded-lifetimes.html#無制限のライフタイム","hrtb.html#高階トレイト境界","subtyping.html#subtyping-and-variance","subtyping.html#variance","dropck.html#drop-check","dropck.html#an-escape-hatch","dropck.html#is-that-all-about-drop-checker","phantom-data.html#phantomdata","borrow-splitting.html#splitting-borrows","conversions.html#type-conversions","coercions.html#coercions","dot-operator.html#the-dot-operator","casts.html#casts","transmutes.html#transmutes","uninitialized.html#working-with-uninitialized-memory","checked-uninit.html#checked-uninitialized-memory","drop-flags.html#drop-flags","unchecked-uninit.html#unchecked-uninitialized-memory","obrm.html#所有権に基づいたリソース管理ownership-based-resource-management-obrmの危険性について","constructors.html#constructors","destructors.html#destructors","leaking.html#leaking","leaking.html#drain","leaking.html#rc","leaking.html#threadscopedjoinguard","unwinding.html#unwinding","exception-safety.html#exception-safety","exception-safety.html#vecpush_all","exception-safety.html#binaryheapsift_up","poisoning.html#poisoning","concurrency.html#concurrency-and-parallelism","races.html#data-races-and-race-conditions","send-and-sync.html#
send-and-sync","atomics.html#atomics","atomics.html#compiler-reordering","atomics.html#hardware-reordering","atomics.html#data-accesses","atomics.html#sequentially-consistent","atomics.html#acquire-release","atomics.html#relaxed","vec.html#example-implementing-vec","vec-layout.html#layout","vec-alloc.html#allocating-memory","vec-push-pop.html#push-and-pop","vec-dealloc.html#deallocating","vec-deref.html#deref","vec-insert-remove.html#insert-and-remove","vec-into-iter.html#intoiter","vec-raw.html#rawvec","vec-drain.html#drain","vec-zsts.html#handling-zero-sized-types","vec-zsts.html#allocating-zero-sized-types","vec-zsts.html#iterating-zero-sized-types","vec-final.html#the-final-code","arc-and-mutex.html#implementing-arc-and-mutex"],"index":{"documentStore":{"docInfo":{"0":{"body":2,"breadcrumbs":1,"title":1},"1":{"body":9,"breadcrumbs":1,"title":1},"10":{"body":35,"breadcrumbs":0,"title":0},"11":{"body":1,"breadcrumbs":0,"title":0},"12":{"body":32,"breadcrumbs":1,"title":1},"13":{"body":12,"breadcrumbs":4,"title":4},"14":{"body":10,"breadcrumbs":1,"title":1},"15":{"body":41,"breadcrumbs":0,"title":0},"16":{"body":4,"breadcrumbs":0,"title":0},"17":{"body":24,"breadcrumbs":0,"title":0},"18":{"body":52,"breadcrumbs":0,"title":0},"19":{"body":7,"breadcrumbs":0,"title":0},"2":{"body":46,"breadcrumbs":0,"title":0},"20":{"body":63,"breadcrumbs":0,"title":0},"21":{"body":61,"breadcrumbs":0,"title":0},"22":{"body":63,"breadcrumbs":0,"title":0},"23":{"body":119,"breadcrumbs":0,"title":0},"24":{"body":97,"breadcrumbs":0,"title":0},"25":{"body":12,"breadcrumbs":0,"title":0},"26":{"body":102,"breadcrumbs":0,"title":0},"27":{"body":108,"breadcrumbs":2,"title":2},"28":{"body":672,"breadcrumbs":1,"title":1},"29":{"body":525,"breadcrumbs":2,"title":2},"3":{"body":96,"breadcrumbs":0,"title":0},"30":{"body":312,"breadcrumbs":2,"title":2},"31":{"body":22,"breadcrumbs":2,"title":2},"32":{"body":249,"breadcrumbs":1,"title":1},"33":{"body":702,"breadcrumbs":2,"title":2},"34":{"body":101,"
breadcrumbs":2,"title":2},"35":{"body":245,"breadcrumbs":3,"title":1},"36":{"body":25,"breadcrumbs":4,"title":2},"37":{"body":282,"breadcrumbs":3,"title":1},"38":{"body":149,"breadcrumbs":3,"title":1},"39":{"body":42,"breadcrumbs":3,"title":3},"4":{"body":85,"breadcrumbs":1,"title":1},"40":{"body":242,"breadcrumbs":5,"title":3},"41":{"body":281,"breadcrumbs":4,"title":2},"42":{"body":351,"breadcrumbs":5,"title":3},"43":{"body":8,"breadcrumbs":5,"title":5},"44":{"body":213,"breadcrumbs":1,"title":1},"45":{"body":415,"breadcrumbs":1,"title":1},"46":{"body":239,"breadcrumbs":1,"title":1},"47":{"body":200,"breadcrumbs":1,"title":1},"48":{"body":159,"breadcrumbs":1,"title":1},"49":{"body":226,"breadcrumbs":1,"title":1},"5":{"body":1,"breadcrumbs":1,"title":1},"50":{"body":37,"breadcrumbs":1,"title":1},"51":{"body":169,"breadcrumbs":3,"title":2},"52":{"body":106,"breadcrumbs":2,"title":1},"53":{"body":369,"breadcrumbs":2,"title":1},"54":{"body":16,"breadcrumbs":2,"title":1},"55":{"body":70,"breadcrumbs":2,"title":2},"56":{"body":282,"breadcrumbs":5,"title":4},"57":{"body":323,"breadcrumbs":3,"title":2},"58":{"body":100,"breadcrumbs":2,"title":1},"59":{"body":79,"breadcrumbs":3,"title":2},"6":{"body":132,"breadcrumbs":1,"title":1},"60":{"body":229,"breadcrumbs":3,"title":2},"61":{"body":179,"breadcrumbs":3,"title":2},"62":{"body":120,"breadcrumbs":3,"title":2},"63":{"body":157,"breadcrumbs":3,"title":2},"64":{"body":74,"breadcrumbs":2,"title":1},"65":{"body":89,"breadcrumbs":3,"title":3},"66":{"body":269,"breadcrumbs":3,"title":1},"67":{"body":897,"breadcrumbs":4,"title":2},"68":{"body":171,"breadcrumbs":4,"title":2},"69":{"body":81,"breadcrumbs":3,"title":1},"7":{"body":1,"breadcrumbs":0,"title":0},"70":{"body":99,"breadcrumbs":3,"title":1},"71":{"body":138,"breadcrumbs":4,"title":2},"72":{"body":334,"breadcrumbs":3,"title":1},"73":{"body":262,"breadcrumbs":3,"title":1},"74":{"body":328,"breadcrumbs":3,"title":1},"75":{"body":65,"breadcrumbs":6,"title":4},"76":{"body":206
,"breadcrumbs":6,"title":4},"77":{"body":149,"breadcrumbs":6,"title":4},"78":{"body":540,"breadcrumbs":4,"title":2},"79":{"body":24,"breadcrumbs":3,"title":3},"8":{"body":22,"breadcrumbs":4,"title":4},"9":{"body":48,"breadcrumbs":5,"title":5}},"docs":{"0":{"body":"高度で危険な Rust Programming のための闇の技法","breadcrumbs":"Rust 裏本","id":"0","title":"Rust 裏本"},"1":{"body":"私に与えられたのは、望んだようなプログラムではなく、身を震わせるような暗黒と言い表せないような孤独であった。そして私はついに、誰ひとり口にしようともしなかった恐ろしい真実、ささやくことすらできない神秘中の神秘を目にしたのだ。石のように硬く、耳障りな音をたてるこの言語は、ロンドンが古きロンドンではなく、パリが古きパリではないように、Rust の御代をとこしえにするものではなく、実はきわめて危険で、不完全に防腐処理された、だらしなく寝そべった死体だったのだ。そこにはコンパイル時に生まれた奇妙な生き物たちが所在なさげに蔓延っていた。 (訳注: H.P. ラヴクラフトの小説「 あの男 」のパロディのようです。) この本は、危険な Rust プログラムを正しく書くために理解しなくてはいけない、不愉快な詳細について詳しく見ていきます。このような性質上、この文書は今まで語られることのなかった恐怖を解き放ち、あなたの精神を何十億もの絶望のかけらに砕いてしまうかもしれません。 もし貴方が Rust とともに長く幸せな人生を歩みたいと望むなら、今すぐに背を向けて、この本を見てしまったことを忘れるのです。貴方には必要ないのですから。しかし、危険なコードを書く意思がもしも貴方にあるのなら、もしくはこの言語の最重要部にただ踏み込んでみたいのなら、この本は代えがたい情報をもたらすでしょう。 The Book とは異なり、ここでは多くの事前知識を前提としています。特に基本的なシステムプログラミングと Rust に精通していなくてはなりません。もし貴方がそうでないなら、まず The Book を読む べきでしょう。とはいえ、The Book は前提ではありませんし、適切な場面で基本知識を復習する機会を与えます。The Book を飛ばしてこの本を読んでも構いませんが、すべてが基礎から説明されるわけではないことを覚えておいてください。 はっきり言いますが、この本は詳細について深く説明します。例外の安全性、ポインタエイリアシング、メモリモデル、そして型理論についても少し。また、様々な種類の安全性や保証についてもたくさん説明します。 訳注: 原文は こちら 、日本語の翻訳文書は こちら です。","breadcrumbs":"NOTE: この文書はドラフトです。重大な間違いを含んでいるかもしれません。","id":"1","title":"NOTE: この文書はドラフトです。重大な間違いを含んでいるかもしれません。"},"10":{"body":"Rust では、 インスタンスを生成できない 型を宣言することもできます。 こういう型は、型レベルの話にのみ出てきて、値レベルには出てきません。 空の型は、識別子を持たない enum として宣言できます。 enum Void {} // 識別子なし = 空 空の型は、ZST よりもまれにしか使いません。 空の型がもっとも必要になる例としては、型レベルの到達不可能性を示す時です。 例えば、ある API は、一般に Result を返す必要がありますが、 特定のケースでは絶対に失敗しないことがわかっているとします。 Result を返すことで、この事実を型レベルで伝えることが可能です。 Void 型の値を提供することはできないので、この Result は Err に なり得ないと静的にわかります 。 そのため、この API の利用者は、自信を持って Result をアンラップすることができます。 原理的に、Rust ではこの事実をもとに、興味深い解析と最適化が可能です。 たとえば、Result は Err にはなり得ないので、 T と表現することができます。以下のコードがコンパイルに通るようにも できる でしょう。 enum Void {} let res: Result = Ok(0); // Err 
は存在しないので、Ok になることに疑問の余地はありません。\nlet Ok(num) = res; ただし、どちらの例も現時点では動きません。 つまり、Void 型による利点は、静的な解析によって、特定の状況が起こらないと確実に言えることだけです。 最後に細かいことを一つ。空の型を指す生ポインタを構成することは有効ですが、 それを参照外しすることは、意味がないので、未定義の挙動となります。 つまり、C における void * と同じような意味で *const Void を使うこと出来ますが、 これは、 安全に 参照外しできる型(例えば *const ())と比べて何も利点はありません。","breadcrumbs":"データレイアウト » 空の型","id":"10","title":"空の型"},"11":{"body":"Rust では、デフォルトとは異なる、代替のデータレイアウトを指定することができます。","breadcrumbs":"データレイアウト » 代替メモリレイアウト","id":"11","title":"代替メモリレイアウト"},"12":{"body":"これは最も重要な repr です。意味はとても単純で、「C がやるようにやれ」です。 フィールドの順序、サイズ、アラインメントが、C や C++ に期待するのと全く同じになります。 FFI 境界を超えるであろう型は、すべて repr(C) になるべきです。 C はプログラミング界の共通言語なのですから。 また、値を別の型として再解釈する、といった複雑なトリックをやる場合にも repr(C) は必須です。 しかし、Rust の風変わりなデータレイアウト機能との相互作用も忘れてはいけません。 「FFI のため」と「データレイアウトのため」という二つの目的があるため、 FFI 境界を超えることが無意味または問題になるような型にも repr(C) は適用されます。 ZST のサイズはやはり 0 になります。これは C の標準的な挙動ではないし、C++ の挙動 (空の型も 1 byte を消費します)とは明確に異なります。 DST, タプル, タグ付き共用体という概念は C には存在しないため、FFI では安全に使えません。 repr(C) を適用した状況では、タプルは構造体と似ています。構造体との違いは、フィールドに名前がないことだけです。 型に drop flags が付いていても、その型は追加されます。 enum については、repr(u*) (次のセクションで説明します)と同等です。選んだサイズが、対象となるプラットフォームの C ABI でのデフォルトの enum のサイズとなります。C での enum のデータ表現は実装依存なので、これはベストの推測でしかないことに注意してください。とくに、対象の C コードが特定のフラグつきでコンパイルされた場合に、正しく動かないかもしれません。","breadcrumbs":"データレイアウト » repr(C)","id":"12","title":"repr(C)"},"13":{"body":"これらは、enum を C っぽくレイアウトするように指示します。 enum の要素が指定した整数をオーバーフローする場合、コンパイルエラーとなります。 オーバーフローする値を 0 に設定するよう Rust に指定することもできますが、 2 つの異なる enum 要素が同じ値を取ることはできません。 C っぽくない enum (訳注:要素がパラメータをとるような enum)に repr(u*) を適用すると、 ヌルポインタ最適化のようなある種の最適化ができなくなります。 この repr を構造体につかっても効果はありません。","breadcrumbs":"データレイアウト » repr(u8), repr(u16), repr(u32), repr(u64)","id":"13","title":"repr(u8), repr(u16), repr(u32), repr(u64)"},"14":{"body":"repr(packed) を使うと Rust はパディングを一切取り除き、すべてをバイト単位にアラインします。 メモリ使用量は改善しますが、悪い副作用を引き起こす可能性が高いです。 特にほとんどのアーキテクチャは、値がアラインされていることを 強く 望んでいます。 つまりアラインされていないデータの読み込みにはペナルティがある(x86)かもしれませんし、 失敗する(いくつかの ARM チップ)かもしれません。 パックされたフィールドを直接読んだり書いたりするという単純なケースでは、 
コンパイラがシフトやマスクを駆使してアラインメントの問題を隠してくれるかもしれません。 しかし、パックされたフィールドへの参照を扱う場合には、アラインされてない読み込みを避けるような コードをコンパイラが生成することは期待できないでしょう。 Rust 1.0 時点では、これは未定義な挙動です。 repr(packed) は気軽に使えるものではありません。 極端な要求に応えようとしているのでない限り、使うべきではありません。 この repr は repr(C) や repr(rust) の修飾子として使えます。","breadcrumbs":"データレイアウト » repr(packed)","id":"14","title":"repr(packed)"},"15":{"body":"所有権は Rust が爆発的に有名になるきっかけとなった機能です。 所有権により、Rust は完全にメモリ安全かつ、ガーベジコレクションがないため効率的になります。 所有権の詳細に立ち入る前に、この機能がなぜ必要なのかを考えてみましょう。 ガーベジコレクション(GC)が常に最適なソリューションではないこと、 手動のメモリ管理の方が望ましいケースもあることには異論はないと思います。 もしそう思わないなら、別の言語に興味を持った方が良いですよ? あなたが GC のことをどう思っていようとも、GC はコードを安全にするために とてつもない 恩恵をもたらしました。 オブジェクトが 早すぎるタイミングで 消えてしまう心配が全く必要ないんです。 (とはいえ、そのオブジェクトへのポインタをその時点まで保有しておくべきかどうかというのは別の問題ですが・・・) これは、C や C++ プログラムが対処しなければならない、広範囲に広がっている問題です。 GC の無い言語を使ったことのあるひとなら誰でも一度はやってしまった、この単純な間違いを見てみましょう。 fn as_str(data: &u32) -> &str { // 文字列を生成する let s = format!(\"{}\", data); // しまった! この関数内でしか存在しないオブジェクトへの // 参照を返してしまった! // ダングリングポインタだ! メモリ解放後の参照だ! うわーー! // (このコードは Rust ではコンパイルエラーになります) &s\n} これこそが、Rust の所有権システムが解決する問題なのです。 Rust は &s が生存するスコープを理解し、&s がそのスコープ外に逃げることを防ぎます。 しかし、この単純なケースは、C コンパイラですらうまいこと防ぐことができるでしょう。 コードが大きくなり、様々な関数にポインタが渡されるようになると、やっかいなことになります。 いずれ C コンパイラは、十分なエスケープ解析ができなくなり、コードが健全である証明に失敗し、屈服することになるのです。 結果的に、C コンパイラはあなたのプログラムが正しいと仮定して、それを受け入れることを強制されます。 これは Rust では決して起こりません。全てが健全であるとコンパイラに証明するのはプログラマの責任なのです。 もちろん、参照が参照先のスコープから逃げ出していないことを検証することよりも 所有権に関する Rust の話はもっともっと複雑です。 ポインタが常に有効であることを証明するのは、もっともっと複雑だからです。 例えばこのコードを見てみましょう。 let mut data = vec![1, 2, 3];\n// 内部データの参照を取る\nlet x = &data[0]; // しまった! `push` によって `data` の格納先が再割り当てされてしまった。\n// ダングリングポインタだ! メモリ解放後の参照だ! 
うわーー!\n// (このコードは Rust ではコンパイルエラーになります)\ndata.push(4); println!(\"{}\", x); 単純なスコープ解析では、このバグは防げません。 data のライフタイムは十分に長いからです。 問題は、その参照を保持している間に、参照先が 変わって しまったことです。 Rust で参照を取ると、参照先とその所有者がフリーズされるのは、こういう理由なのです。","breadcrumbs":"所有権とライフタイム","id":"15","title":"所有権とライフタイム"},"16":{"body":"このセクションでは、 すべての Rust プログラムが満たさなくてはならないメモリモデルを ざっくりと見ていきます。 安全なコードは、借用チェッカによってこのモデルを満たしていることが静的に検証されます。 アンセーフなコードは、借用チェッカの裏をかくかもしれませんが、このモデルを満たします。 この基本的なモデルを満たしている限り、より多くのプログラムがコンパイルに通るように 借用チェッカを拡張することも可能です。 参照には 2 種類あります。 共有参照: & 可変参照: &mut 参照は次のルールに従います。 参照のライフタイムが、参照先のライフタイムより長くなることはできません。 可変参照は、別名を持つことができません。 これだけです。これがモデルの全てです。 もちろん、 別名を持つ とはどういうことかを定義するべきでしょう。 別名を定義するには、 パス と 生存 という概念を定義しなくてはなりません。 これから説明するモデルは疑わしく、問題があるという点に、多くの人が同意しています。 直感的なモデルとして使うにはたぶん大丈夫ですが、望むような意味論を捉えることはできないでしょう。 ここではその点にこだわらず、のちの節で使うための概念を紹介することにします。 将来的にはこの構成は大きく変わるでしょう。TODO: 構成を変える。","breadcrumbs":"所有権とライフタイム » 参照","id":"16","title":"参照"},"17":{"body":"もし、Rust が扱うのが値だけ(ポインタはない)であれば、 すべての値はただ一つの変数か複合型に所有されることになります。 ここから所有権の 木構造 が自然に導かれます。 スタック自身が木のルートになり、変数が直接の子になります。 変数がフィールドを持つのであれば、それは変数の直接の子になるでしょう。 このように見ると、Rust におけるすべての値は、所有権を表す木構造の パス を持つことになります。 特に重要なのは、 先祖 と 子孫 です。x が y が所有しているとき、x は y の先祖で、 y は x の子孫です。この関係は内包的であることに注意してください。 x はそれ自身の先祖であり子孫です。 参照は、単純にパスの 名前 と定義できます。 参照を作成するということは、あるメモリアドレスに所有権の パスが存在することを宣言するということです。 悲惨なことに、スタックに存在しないデータはたくさんあり、この点も考慮しなくてはいけません。 グローバル変数やスレッドローカル変数は、単純にスタックの底に存在すると考えることができます。 (ただし、可変なグローバル変数に注意が必要です)。 ヒープにあるデータは別の問題を提起します。 もし、ヒープにある各データが、スタック上のただ一つのポインタに所有されているのだとすると、 そういうポインタを、ヒープ上の値を所有する構造体だと解釈すればよいだけです。 ヒープ上のデータを独占的に所有する型の例としては、Box, Vec, String, HashMap があります。 残念ながら、ヒープ上のデータは 常に 独占的に所有されているわけではありません。 例えば Rc によって、 共有 所有権という概念がでてきます。 値が共有所有されると、その値への一意なパスが存在しないことになります。 一意なパスが存在しない値によって、いろいろな制約が発生します。 一般に、一意ではないパスを参照できるのは、共有参照だけです。 しかし、相互排他を保証するメカニズムがあれば、一時的にその値(とそしてすべての子ども)への唯一のパスを確立し、 「唯一の真の所有者」を確立できるかもしれません。 もしこれが可能なら、その値を変更できるかもしれません。 とくに、可変参照を取ることができるようになります。 そのようなパスを確立するために、Rust で標準的に使われる 継承可変性 ではなく、 内部可変性 がよく使われます。 内部可変性を持った型の例としては、Cell, RefCell, Mutex, 
RWLock があります。 これらの型は、実行時の制約を用いて、排他的アクセスを提供します。 この効果を使った興味深い例が Rc 自身です。もし Rc の参照カウントが 1 なら、 内部状態を変更したり、ムーブしたりしても安全です。 参照カウント自体も内部可変性を使っています。 変数や構造体のフィールドに内部可変性があることを型システムに正しく伝えるには、 UnsafeCell を使います。 UnsafeCell 自身は、その値に対して内部可変の操作を安全にはしません。 正しく相互排他していることを、あなた自身が保証しなくてはなりません。","breadcrumbs":"所有権とライフタイム » パス","id":"17","title":"パス"},"18":{"body":"生存性 (liveness) は、この章の次の節で詳しく説明する ライフタイム (lifetime) とは違うことに注意してください。 大雑把に言うと、参照を参照外しできるとき、 その参照は、プログラム中のある時点で 生存している といえます。 共有参照は、文字通り到達不可能な場合(たとえば、解放済みメモリやリークしてるメモリに 存在している場合)を除いて、常に生存しています。 可変参照には、 又貸し というプロセスがあるので、到達できても生存して いない ことがあります。 可変参照は、その子孫を他の共有参照または可変参照に又貸しすることができます。 又貸しした参照は、派生したすべての又貸しの有効期限が切れると、再び生存することになります。 例えば、可変参照は、その参照先の一つのフィールドを指す参照を又貸しすることができます。 let x = &mut (1, 2);\n{ // x のフィールドを又借りする let y = &mut x.0; // この時点で y は生存しているが、x は生存していない *y = 3;\n}\n// y がスコープ外に出たので、x が再び生存中になる\n*x = (5, 7); 複数の 可変参照に又貸しすることも可能ですが、その複数の参照は互いに素でなくてはいけません。 つまり、どの参照も他の参照の先祖であってはいけないということです。 Rust は、構造体のフィールドが互いに素であることを静的に証明できるので、 フィールドの又貸しが可能です。 let x = &mut (1, 2);\n{ // x を 2 つの互いに素なフィールドに又貸しする let y = &mut x.0; let z = &mut x.1; // y と z は生存しているが、x は生存していない *y = 3; *z = 4;\n}\n// y と z がスコープ外に出たので、x がふたたび生存中になる\n*x = (5, 7); ただし、多くの場合 Rust は十分に賢くないので、複数の借り手が互いに素であることを証明できません。 これはそのような又貸しが禁じられているという意味ではなく 、 単に Rust が期待するほど賢くないというだけです。 話を単純にするために、変数を参照型の一種、 所有中 参照、と仮定してみましょう。 所有中参照は、可変参照とほとんど同じ意味を持ちます。 可変参照または共有参照に又貸しでき、それによって生存中ではなくなります。 生存中の所有中参照は、値を解放(ムーブアウト)できるという特殊な性質があります (とはいえ、可変参照は値をスワップアウトできますが)。 この能力は、 生存中の 所有中参照のみに与えられています。 そうでなければ、早すぎるタイミングでその他の参照を無効にすることになります。 不適切な値の変更を lint が検出するので、mut とマークされた変数だけが変更可能なように借用されます。 Box がまさに所有中参照のように振る舞うということを覚えておくと良いでしょう。 Box は値を解放することができ、変数が解放された時と同様に Rust はそのパスについて推論するための 十分な情報を持っています。","breadcrumbs":"所有権とライフタイム » 生存性","id":"18","title":"生存性"},"19":{"body":"生存性とパスを定義したので、ようやく 別名 を適切に定義できます。 可変参照は、その先祖か子孫に他の参照が存在している時、別名を持つといいます。 (二つの生存中の参照が互いの別名になっている、と言うこともできます。 意味上の違いは特にありませんが、プログラムの構造の健全性を検証する時には、 この考え方の方がわかりやすいでしょう。) これだけです。すげーわかりやすいですよね? 
この定義に必要なすべての用語を定義するのに 2 ページ必要に なりましたが・・・。すごく、分かりやすい。でしょ? 実際には、もう少し複雑です。参照に加えて Rust には 生ポインタ もあります。 *const T と *mut T のことです。 生ポインタには、継承可能な所有権も別名という概念もありません。 そのため、Rust は生ポインタを追跡する努力を一切しませんし、生ポインタは極めて危険です。 生ポインタが別名という意味をどの程度持っているのか、というのはまだ答えの出てない問題です。 しかし、この節で出てきた定義が健全であるためには、生ポインタを使うとある種の生存パスが わからなくなるということ重要です。","breadcrumbs":"所有権とライフタイム » 別名付け","id":"19","title":"別名付け"},"2":{"body":"安全な「高級」言語のプログラマは、本質的なジレンマに直面します。何が欲しいかをただ伝えるだけで、 それがどのように実現されるかを悩む必要がないのは 本当に 素晴らしいのですが、それが許容できないほどの ひどいパフォーマンスをもたらすこともあります。 期待するパフォーマンスを得るために、明瞭で慣用的なやり方を断念しなくてはいけないかもしれないし、 または、どうしようもないと諦めて、それほど心地よくない 危険な 言語で実装することを決心するかもしれません。 もっと悪いことに、オペレーティングシステムと直接話したい時には、 C 言語 という危険な言語で 会話を しなくてはなりません 。C 言語はつねに存在し、逃れることはできないのです。 C 言語はプログラミングの世界での橋渡し言語だからです。 他の安全な言語も、たいてい C 言語のインターフェースを世界中に野放しでさらしています。 理由の如何にかかわらず、あなたのプログラムが C 言語と会話した瞬間に、安全ではなくなるのです。 とはいえ、Rust は 完全に 安全なプログラミング言語です。 ・・・いえ、Rust は安全なプログラミング言語を もっている と言えます。一歩下がって考えてみましょう。 Rust は 2 つのプログラミング言語から成り立っていると言えます。 安全な Rust と 危険な Rust です。 安全な Rust は、本当に全く安全ですが、危険な Rust は、当然ですが、本当に全く安全では ありません 。 実際、危険な Rust では本当に 本当に 危険な事ができるのです。 安全な Rust は真の Rust プログラミング言語です。もしあなたが安全な Rust だけでコードを書くなら、 型安全やメモリ安全性などを心配する必要はないでしょう。 ヌルポインタやダングリングポインタ、馬鹿げた「未定義な挙動」などに我慢する必要はないのです。 なんて素晴らしいんだ。 また、標準ライブラリにはすぐに使える十分なユーティリティが揃っています。 それを使えば、ハイパフォーマンスでかっこいいアプリケーションやライブラリを、 正当で慣用的な安全な Rust で書けるでしょう。 でも、もしかしたらあなたは他の言語と話したいかもしれません。もしかしたら標準ライブラリが提供していない 低レイヤを抽象化しようとしているのかもしれないし、もしかしたら標準ライブラリを 書いている (標準ライブラリは Rust で書かれています)のかもしれないし、もしかしたらあなたがやりたい事は、 型システムが理解できない、 ぎょっとするようなこと かもしれません。 もしかしたらあなたには 危険な Rust が必要かもしれません。 危険な Rust のルールとセマンティクスは、安全な Rust と同じです。 ただし、危険な Rust ではちょっと 多くの 事ができ、それは間違いなく安全ではありません。 危険な Rust であなたができる事は、たったこれだけです。 生ポインタが指す値を得る unsafe な関数を呼ぶ(C 言語で書かれた関数や、intrinsic、生のアロケータなど) unsafe なトレイトを実装する 静的な構造体を変更する これだけです。これらの操作がなぜ「危険」と分類されているかというと、 間違って使うととても恐ろしい「未定義な挙動」を引き起こすからです。 「未定義な挙動」が起きると、コンパイラは、あなたのプログラムにとってどんな悪いことでもできるようになります。 何があっても「未定義な挙動」を起こす べきではない です。 C 言語と違って、Rust では「未定義な挙動」は限定されています。 言語コアは次のような事が起きるのを防ぎます。 ヌルポインタやダングリングポインタの参照外し 未初期化のメモリ を読む ポインタエイリアスルール 
を破る 不正なプリミティブな値を生成する ダングリング参照、ヌル参照 0 でも 1 でもない bool 値 未定義な enum 判別式 [0x0, 0xD7FF] と [0xE000, 0x10FFFF] 範囲外の char 値 utf8 ではない str 値 他の言語に巻き戻す データ競合 を引き起こす これだけです。これが、Rust が防ぐ「未定義な挙動」の原因です。 もちろん、危険な関数やトレイトが「未定義な挙動」を起こさないための他の制約を作り出す事は可能ですが、 そういった制約が破られた場合、たいてい上の問題のどれかを引き起こします。 コンパイラ intrinsic がその他の制約を生み出し、コードの最適化に関する特別な仮定をすることもあります。 Rust はその他の疑わしい操作については、とても寛容です。 Rust は次の操作を「安全」だと判断します。 デッドロック 競合状態 メモリリーク デストラクタを呼ぶことに失敗する 整数のオーバーフロー プログラムの異常終了 本番環境のデータベースを削除してしまう事 とはいえ、こういうことをできてしまうプログラムは 恐らく 間違っていると言えるでしょう。 Rust はこういった事を起きにくくするためのツールをたくさん提供します。 しかし、これらの問題を完全に防ぐのは現実的ではないと考えられています。","breadcrumbs":"安全と危険のご紹介","id":"2","title":"安全と危険のご紹介"},"20":{"body":"Rust は今まで説明してきたルールを ライフタイム を使って強制します。 ライフタイムとは、要するにプログラム中のスコープの名前です。 参照と、参照を含むものとは、有効なスコープを示すライフタイムでタグ付けられています。 通常、関数本体では、関係するライフタイムの名前を明示することは求められません。 一般に、ローカルコンテキストにおいてライフタイムを気にする必要はまずないからです。 Rust はすべての情報を持っていて、可能な限りすべてを最適にできます。 省略可能な無名スコープや一時変数は、コードがきちんと動くように自動的に導入されます。 しかし関数の境界を超えると、ライフタイムについて気にしなくてはいけなくなります。 ライフタイムは、'a や 'static などアポストロフィーつきの名前を持ちます。 ライフタイムの世界に足を踏み入れるために、 スコープにライフタイムのラベルをつけられるとして、この章の最初のサンプルコードを 「脱糖 (desugar)」してみましょう。 もともとのサンプルコードは、スコープとライフタイムについて、 果糖がたくさん含まれたコーンシロップのように 強烈に 甘い書き方でした。 (訳注:ライフタイムを省略できることは syntax sugar で、元のコードは大量の syntax sugar を使っているので、「甘い」と言っている) なぜなら、すべてを明示的に書くのは 極めて煩わしい からです。 Rust のコードは、積極的な推論と「明らかな」ことの省略とを当てにしています。 let 文が、スコープを暗黙的に導入するというのは、興味深いシンタックスシュガーでしょう。 ほとんどの場合、これは問題になりません。 しかし、複数の変数がお互いを参照している場合は問題になります。 簡単な例として、この単純な Rust コードを脱糖してみましょう。 let x = 0;\nlet y = &x;\nlet z = &y; 借用チェッカは、ライフタイムの長さを最小にしようとするので、 これは次のように脱糖されるでしょう。 // `'a: {` と `&'b x` は正当な構文ではないことに注意してください!\n'a: { let x: i32 = 0; 'b: { // ここで使用されるライフタイムは 'b です。なぜなら 'b で十分だからです。 let y: &'b i32 = &'b x; 'c: { // 'c も同様 let z: &'c &'b i32 = &'c y; } }\n} おっと。こんなふうに書かなければいけないとしたら・・・これはひどいですね。 ここでしばらく時間をとって、簡単な構文を許してくれる Rust に感謝を意を表しましょう。 参照を外のスコープに返す場合は、Rust はより大きいライフタイムを推論することになります。 let x = 0;\nlet z;\nlet y = &x;\nz = y; 'a: { let x: i32 = 0; 'b: { let z: &'b i32; 'c: { // ここでは 'b を使う必要があります。なぜならこの参照は // スコープ `b 
に渡されるからです。 let y: &'b i32 = &'b x; z = y; } }\n}","breadcrumbs":"所有権とライフタイム » ライフタイム","id":"20","title":"ライフタイム"},"21":{"body":"それでは、以前に出した例を見てみましょう。 fn as_str(data: &u32) -> &str { let s = format!(\"{}\", data); &s\n} は次のように脱糖されます。 fn as_str<'a>(data: &'a u32) -> &'a str { 'b: { let s = format!(\"{}\", data); return &'a s; }\n} as_str のシグネチャは、 ある ライフタイムを持つ u32 への参照をとり、 その参照と 同じ長さだけ 生きる str への参照を生成することを約束します。 このシグネチャが問題になるかもしれないと、すでに話しました。 このシグネチャは、引数の u32 を指す参照が生成されたスコープか、もしくは それより以前のスコープ で、str を探すことを意味します。これはなかなか難しい注文です。 それから文字列 s を計算し、その参照を返します。 この関数は、返される参照が 'a より長生きすることを約束しているので、この参照のライフタイムとして 'a を使うことを推論します。 残念なことに、s はスコープ 'b の中で定義されているので、 この推論が妥当になるためには、'b が 'a を含んでいなくてはなりません。 ところがこれは明らかに成立しません。'a はこの関数呼び出しそものを含んでいるからです。 結局、この関数は参照先より長生きする参照を生成してしまいました。 そしてこれは 文字通り 、参照がやってはいけないことの一番目でした。 コンパイラは正当に怒りだします。 よりわかりやすくするために、この例を拡張してみます。 fn as_str<'a>(data: &'a u32) -> &'a str { 'b: { let s = format!(\"{}\", data); return &'a s }\n} fn main() { 'c: { let x: u32 = 0; 'd: { // この x の借用は、x が有効な全期間より短くて良いので、無名スコープが導入されます。 // as_str は、この呼び出しより前のどこかにある str を見つけなければいけませんが、 // そのような str が無いのはあきらかです。 println!(\"{}\", as_str::<'d>(&'d x)); } }\n} ちくしょう! この関数を正しく書くと、当然次のようになります。 fn to_string(data: &u32) -> String { format!(\"{}\", data)\n} この関数が所有する値を関数内で生成し、それを返さなくてはいけません! 
str が &'a u32 のフィールドだとしたら、&'a str を返せるのですが、 もちろんそれはありえません。 (そういえば、単に文字列リテラルを返すこともできたかもしれません。 文字列リテラルはグローバルで、スタックの底に存在すると解釈できますから。 ただこれはこの関数の実装を ほんの少しだけ 制限してしまいますね。)","breadcrumbs":"所有権とライフタイム » 例:参照先より長く生きる参照","id":"21","title":"例:参照先より長く生きる参照"},"22":{"body":"もう一つの例はどうでしょう。 let mut data = vec![1, 2, 3];\nlet x = &data[0];\ndata.push(4);\nprintln!(\"{}\", x); 'a: { let mut data: Vec = vec![1, 2, 3]; 'b: { // スコープ 'b は次の貸し出しに必要なだけ大きくなります。 // (`println!` を含むまで大きくなります) let x: &'b i32 = Index::index::<'b>(&'b data, 0); 'c: { // &mut は長生きする必要が無いので、一時的なスコープ 'c が作られます。 Vec::push(&'c mut data, 4); } println!(\"{}\", x); }\n} これは、すこし分かりにくいですが面白い問題です。 私たちは、Rust が次のような理由で、このプログラムを拒否するだろうと思っています。 つまり、push するために data への可変参照を取ろうとするとき、 data の子孫への共有参照 x が生存中です。 これは可変参照の別名となり、参照の 二番目 のルールに違反します。 ところが、Rust がこのプログラムを悪いと推論するやり方は 全く違う のです。 Rust は、x が data の部分パスへの参照であることは理解しません。 Rust は Vec のことなど何も知らないのです。 Rust に 見えている のは、x は println! のためにスコープ 'b の中で生存しなくてはならないことです。 さらに、Index::index のシグネチャは、data を参照する参照が スコープ 'b の中で生存することを要求します。 push を呼び出すときに、&'c mut data を取ろうとすることを Rust は理解します。 Rust はスコープ 'c が スコープ 'b に含まれていることを知っているので、 このプログラムを拒否します。 なぜなら、&'b data はまだ生きているからです。 ここでは、ライフタイムをチェックするシステムは、私たちが維持したい参照の意味論に比べて とても荒いことを見てきました。 ほとんどの場合、 これで全く大丈夫 です。 私たちが書いたコードをコンパイラに説明するために丸一日費やさなくてもいいからです。 しかし、ライフタイムのチェックがとてもバカなために、Rust の 真の 意味論的には全く正しいプログラムでも拒否されることがあるのです。","breadcrumbs":"所有権とライフタイム » 例:可変参照の別名付け","id":"22","title":"例:可変参照の別名付け"},"23":{"body":"次のコードを見てみましょう。 struct Foo; impl Foo { fn mutate_and_share(&mut self) -> &Self { &*self } fn share(&self) {}\n} fn main() { let mut foo = Foo; let loan = foo.mutate_and_share(); foo.share();\n} このコードはコンパイルを通ると思うかもしれません。 mutate_and_share は、foo を一時的に変更可能に借用しますが、 共有参照を返します。 そうすると、foo は変更可能には借用されていないので、 foo.share() は成功すると思うでしょう。 ところが、このコードをコンパイルすると・・・。 :11:5: 11:8 error: cannot borrow `foo` as immutable because it is also borrowed as mutable\n(エラー: `foo`は可変で借用されているので、不変で借用できません)\n:11 foo.share(); ^~~\n:10:16: 10:19 note: previous borrow of `foo` occurs here; 
the mutable borrow prevents subsequent moves, borrows, or modification of `foo` until the borrow ends\n(注釈: 以前の`foo`の借用はここで起きています。可変での借用は、その借用が終わるまで、その後のムーブや、借用、`foo`の変更を防ぎます)\n:10 let loan = foo.mutate_and_share(); ^~~\n:12:2: 12:2 note: previous borrow ends here\n(注釈: 以前の借用はここで終了しています)\n:8 fn main() {\n:9 let mut foo = Foo;\n:10 let loan = foo.mutate_and_share();\n:11 foo.share();\n:12 } ^ 何が起こったのでしょう? 前の節の 2 つ目のサンプル と全く同じ推論を行ったのです。 このコードを脱糖すると、次のようになります。 struct Foo; impl Foo { fn mutate_and_share<'a>(&'a mut self) -> &'a Self { &'a *self } fn share<'a>(&'a self) {}\n} fn main() { 'b: { let mut foo: Foo = Foo; 'c: { let loan: &'c Foo = Foo::mutate_and_share::<'c>(&'c mut foo); 'd: { Foo::share::<'d>(&'d foo); } } }\n} loan のライフタイムと mutate_and_share のシグネチャとのため、 &mut foo のライフタイムは 'c に延長されなくてはなりません。 そして、share を呼ぼうとするとき、&'c mut foo の別名を取ろうとすると認識され、大失敗に終わるのです。 このプログラムは、私たちにとって重要な参照の意味的には全く正しいのですが、 ライフタイムシステムはこのプログラムを処理するには粗すぎるのです。 TODO: その他のよくある問題は? 主に SEME 領域とか?","breadcrumbs":"所有権とライフタイム » ライフタイムシステムの限界","id":"23","title":"ライフタイムシステムの限界"},"24":{"body":"よくあるパターンをより易しく書けるように、Rust では関数シグネチャのライフタイムを省略できます。 ライフタイムポジション とは、型の定義においてライフタイムを書ける場所のことです。 &'a T\n&'a mut T\nT<'a> ライフタイムポジションは、「入力」または「出力」という形で現れます。 fn 定義では、入力とは仮引数の型のことで、出力とは結果の型のことです。 fn foo(s: &str) -> (&str, &str) では、入力ポジションのライフタイムが一つ省略され、 出力ポジションのライフタイムが二つ省略されています。 fn メソッド定義の入力ポジションには、 メソッドの impl ヘッダに現れるライフタイムは含まれません。 (デフォルトメソッドの場合の trait ヘッダに現れるライフタイムも含まれません。) 将来のバージョンでは、impl ヘッダのライフタイムの省略も同様に可能になるでしょう。 省略のルールは次の通りです。 入力ポジションの省略されたライフタイムは、それぞれ別のライフタイムパラメータになります。 入力ポジションのライフタイム(省略されているかどうかに関わらず)が一つしか無い場合、 省略された出力ライフタイム全てにそのライフタイムが割り当てられます。 入力ポジションに複数のライフタイムがあって、そのうちの一つが &self または &mut self の場合、 省略された出力ライフタイム全てに self のライフタイムが割り当てられます。 それ以外の場合は、出力のライフタイムを省略するとエラーになります。 例: fn print(s: &str); // 省略した場合\nfn print<'a>(s: &'a str); // 展開した場合 fn debug(lvl: uint, s: &str); // 省略した場合\nfn debug<'a>(lvl: uint, s: &'a str); // 展開した場合 fn substr(s: &str, until: uint) -> &str; // 省略した場合\nfn substr<'a>(s: &'a str, until: uint) 
-> &'a str; // 展開した場合 fn get_str() -> &str; // エラー fn frob(s: &str, t: &str) -> &str; // エラー fn get_mut(&mut self) -> &mut T; // 省略した場合\nfn get_mut<'a>(&'a mut self) -> &'a mut T; // 展開した場合 fn args(&mut self, args: &[T]) -> &mut Command // 省略した場合\nfn args<'a, 'b, T: ToCStr>(&'a mut self, args: &'b [T]) -> &'a mut Command // 展開した場合 fn new(buf: &mut [u8]) -> BufWriter; // 省略した場合\nfn new<'a>(buf: &'a mut [u8]) -> BufWriter<'a> // 展開した場合","breadcrumbs":"所有権とライフタイム » ライフタイムの省略","id":"24","title":"ライフタイムの省略"},"25":{"body":"アンセーフなコードはときに、参照やライフタイムを何も無いところから生み出したりします。 そのようなライフタイムは、 無制限 なライフタイムとして世界に登場します。 最もよくあるのは、生ポインタの参照外しをし、無制限のライフタイムを持つ参照を作り出すというケースです。 このライフタイムは、そのコンテキストが必要とするだけ大きくなります。そしてこれは 'static よりも強力なしくみです。 &'static &'a T は型チェックをパスしませんが、無制限のライフタイムを使うと必要に応じて &'a &'a T となるからです。 しかし、ほとんどの意図と目的においては、無制限のライフタイムを 'static と解釈できます。 参照が 'static であることはまずありえないので、これはおそらく間違っていると言えるでしょう。 おもに transmute と transmute_copy とがこの状況を作り出します。 できるだけ速く、とくに関数の境界では、無制限のライフタイムに制限をつけるように気をつけて下さい。 関数の入力から導出されない出力のライフタイムは無制限となります。例えば、 fn get_str<'a>() -> &'a str; このコードは無制限のライフタイムを持った &str を生成します。 無制限のライフタイムを避ける最も簡単な方法は、関数境界でライフタイムを省略することです。 出力ライフタイムが省略された場合、入力ライフタイムで制限されなくては いけません 。 もちろん、 間違った ライフタイムで制限されるかもしれませんが、たいていの場合は、メモリ安全性が侵されるのではなく、コンパイルエラーにつながります。 関数内部でライフタイムを制限することは、エラーを生みやすくなります。 ライフタイムを制限する安全で簡単な方法は、制限つきライフタイムの関数から返される値を使うことです。 しかし、これができない場合は、その参照を特定のライフタイムがついた場所に置くと良いでしょう。 残念ながら、関数内のすべてのライフタイムに名前をつけるのは不可能です。","breadcrumbs":"所有権とライフタイム » 無制限のライフタイム","id":"25","title":"無制限のライフタイム"},"26":{"body":"Rust の Fn トレイトはちょっとした魔法です。例えば、次のように書くことができます。 struct Closure { data: (u8, u16), func: F,\n} impl Closure where F: Fn(&(u8, u16)) -> &u8,\n{ fn call(&self) -> &u8 { (self.func)(&self.data) }\n} fn do_it(data: &(u8, u16)) -> &u8 { &data.0 } fn main() { let clo = Closure { data: (0, 1), func: do_it }; println!(\"{}\", clo.call());\n} ライフタイムの節と同じように単純に脱糖しようとすると、問題が起こります。 struct Closure { data: (u8, u16), func: F,\n} impl Closure // where F: Fn(&'??? (u8, u16)) -> &'??? 
u8,\n{ fn call<'a>(&'a self) -> &'a u8 { (self.func)(&self.data) }\n} fn do_it<'b>(data: &'b (u8, u16)) -> &'b u8 { &'b data.0 } fn main() { 'x: { let clo = Closure { data: (0, 1), func: do_it }; println!(\"{}\", clo.call()); }\n} F のトレイト境界は、一体どうすれば表現できるのでしょう? なんらかのライフタイムを提供する必要がありますが、問題のライフタイムは call 関数が呼ばれるまで名前が無いのです。さらに、ライフタイムは固定されていません。 &self に どんな ライフタイムが割り当てられても、call は動作します。 この問題は、高階トレイト境界(HRTB: Higher-Rank Trait Bounds)という魔法で解決できます。 HRTB を使うと次のように脱糖できます。 where for<'a> F: Fn(&'a (u8, u16)) -> &'a u8, (Fn(a, b, c) -> d 自体が、まだ仕様が安定していない 本当の Fn トレイトの糖衣構文です。) for<'a> は、「'a に何を選んだとしても」という意味で、つまり F が満たさなくてはならないトレイト境界の 無限リスト を生成します。強烈でしょう? Fn トレイトを除けば、HRTB が使われる場所はほとんどありません。Fn トレイトにおいても、ほとんどの場合は魔法の糖衣構文が用いられています。","breadcrumbs":"所有権とライフタイム » 高階トレイト境界","id":"26","title":"高階トレイト境界"},"27":{"body":"Although Rust doesn't have any notion of structural inheritance, it does include subtyping. In Rust, subtyping derives entirely from lifetimes. Since lifetimes are scopes, we can partially order them based on the contains (outlives) relationship. We can even express this as a generic bound. Subtyping on lifetimes is in terms of that relationship: if 'a: 'b (\"a contains b\" or \"a outlives b\"), then 'a is a subtype of 'b. This is a large source of confusion, because it seems intuitively backwards to many: the bigger scope is a subtype of the smaller scope. This does in fact make sense, though. The intuitive reason for this is that if you expect an &'a u8, then it's totally fine for me to hand you an &'static u8, in the same way that if you expect an Animal in Java, it's totally fine for me to hand you a Cat. Cats are just Animals and more , just as 'static is just 'a and more . (Note, the subtyping relationship and typed-ness of lifetimes is a fairly arbitrary construct that some disagree with. However it simplifies our analysis to treat lifetimes and types uniformly.) Higher-ranked lifetimes are also subtypes of every concrete lifetime. 
This is because taking an arbitrary lifetime is strictly more general than taking a specific one.","breadcrumbs":"所有権とライフタイム » Subtyping and Variance","id":"27","title":"Subtyping and Variance"},"28":{"body":"Variance is where things get a bit complicated. Variance is a property that type constructors have with respect to their arguments. A type constructor in Rust is a generic type with unbound arguments. For instance Vec is a type constructor that takes a T and returns a Vec<T>. & and &mut are type constructors that take two inputs: a lifetime, and a type to point to. A type constructor's variance is how the subtyping of its inputs affects the subtyping of its outputs. There are two kinds of variance in Rust: F<T> is variant over T if T being a subtype of U implies F<T> is a subtype of F<U> (subtyping \"passes through\") F<T> is invariant over T otherwise (no subtyping relation can be derived) (For those of you who are familiar with variance from other languages, what we refer to as \"just\" variance is in fact covariance . Rust has contravariance for functions. The future of contravariance is uncertain and it may be scrapped. For now, fn(T) is contravariant in T, which is used in matching methods in trait implementations to the trait definition. Traits don't have inferred variance, so Fn(T) is invariant in T). Some important variances: &'a T is variant over 'a and T (as is *const T by metaphor) &'a mut T is variant over 'a but invariant over T Fn(T) -> U is invariant over T, but variant over U Box, Vec, and all other collections are variant over the types of their contents UnsafeCell<T>, Cell<T>, RefCell<T>, Mutex<T> and all other interior mutability types are invariant over T (as is *mut T by metaphor) To understand why these variances are correct and desirable, we will consider several examples. We have already covered why &'a T should be variant over 'a when introducing subtyping: it's desirable to be able to pass longer-lived things where shorter-lived things are needed. 
Similar reasoning applies to why it should be variant over T. It is reasonable to be able to pass &&'static str where an &&'a str is expected. The additional level of indirection does not change the desire to be able to pass longer-lived things where shorter-lived things are expected. However this logic doesn't apply to &mut. To see why &mut should be invariant over T, consider the following code: fn overwrite<T: Copy>(input: &mut T, new: &mut T) { *input = *new;\n} fn main() { let mut forever_str: &'static str = \"hello\"; { let string = String::from(\"world\"); overwrite(&mut forever_str, &mut &*string); } // Oops, printing free'd memory println!(\"{}\", forever_str);\n} The signature of overwrite is clearly valid: it takes mutable references to two values of the same type, and overwrites one with the other. If &mut T was variant over T, then &mut &'static str would be a subtype of &mut &'a str, since &'static str is a subtype of &'a str. Therefore the lifetime of forever_str would successfully be \"shrunk\" down to the shorter lifetime of string, and overwrite would be called successfully. string would subsequently be dropped, and forever_str would point to freed memory when we print it! Therefore &mut should be invariant. This is the general theme of variance vs invariance: if variance would allow you to store a short-lived value into a longer-lived slot, then you must be invariant. However it is sound for &'a mut T to be variant over 'a. The key difference between 'a and T is that 'a is a property of the reference itself, while T is something the reference is borrowing. If you change T's type, then the source still remembers the original type. However if you change the lifetime's type, no one but the reference knows this information, so it's fine. Put another way: &'a mut T owns 'a, but only borrows T. Box and Vec are interesting cases because they're variant, but you can definitely store values in them! 
This is where Rust gets really clever: it's fine for them to be variant because you can only store values in them via a mutable reference ! The mutable reference makes the whole type invariant, and therefore prevents you from smuggling a short-lived type into them. Being variant allows Box and Vec to be weakened when shared immutably. So you can pass a &Box<&'static str> where a &Box<&'a str> is expected. However what should happen when passing by-value is less obvious. It turns out that, yes, you can use subtyping when passing by-value. That is, this works: fn get_box<'a>(str: &'a str) -> Box<&'a str> { // string literals are `&'static str`s Box::new(\"hello\")\n} Weakening when you pass by-value is fine because there's no one else who \"remembers\" the old lifetime in the Box. The reason a variant &mut was trouble was because there's always someone else who remembers the original subtype: the actual owner. The invariance of the cell types can be seen as follows: & is like an &mut for a cell, because you can still store values in them through an &. Therefore cells must be invariant to avoid lifetime smuggling. Fn is the most subtle case because it has mixed variance. To see why Fn(T) -> U should be invariant over T, consider the following function signature: // 'a is derived from some parent scope\nfn foo(&'a str) -> usize; This signature claims that it can handle any &str that lives at least as long as 'a. Now if this signature was variant over &'a str, that would mean fn foo(&'static str) -> usize; could be provided in its place, as it would be a subtype. However this function has a stronger requirement: it says that it can only handle &'static strs, and nothing else. Giving &'a strs to it would be unsound, as it's free to assume that what it's given lives forever. Therefore functions are not variant over their arguments. 
To see why Fn(T) -> U should be variant over U, consider the following function signature: // 'a is derived from some parent scope\nfn foo(usize) -> &'a str; This signature claims that it will return something that outlives 'a. It is therefore completely reasonable to provide fn foo(usize) -> &'static str; in its place. Therefore functions are variant over their return type. *const has the exact same semantics as &, so variance follows. *mut on the other hand can dereference to an &mut whether shared or not, so it is marked as invariant just like cells. This is all well and good for the types the standard library provides, but how is variance determined for types that you define? A struct, informally speaking, inherits the variance of its fields. If a struct Foo has a generic argument A that is used in a field a, then Foo's variance over A is exactly a's variance. However this is complicated if A is used in multiple fields. If all uses of A are variant, then Foo is variant over A Otherwise, Foo is invariant over A use std::cell::Cell; struct Foo<'a, 'b, A: 'a, B: 'b, C, D, E, F, G, H> { a: &'a A, // variant over 'a and A b: &'b mut B, // variant over 'b and invariant over B c: *const C, // variant over C d: *mut D, // invariant over D e: Vec<E>, // variant over E f: Cell<F>, // invariant over F g: G, // variant over G h1: H, // would also be variant over H except... h2: Cell<H>, // invariant over H, because invariance wins\n}","breadcrumbs":"所有権とライフタイム » Variance","id":"28","title":"Variance"},"29":{"body":"We have seen how lifetimes provide us some fairly simple rules for ensuring that we never read dangling references. However up to this point we have only ever interacted with the outlives relationship in an inclusive manner. That is, when we talked about 'a: 'b, it was ok for 'a to live exactly as long as 'b. At first glance, this seems to be a meaningless distinction. Nothing ever gets dropped at the same time as another, right? 
This is why we used the following desugaring of let statements: let x;\nlet y; { let x; { let y; }\n} Each creates its own scope, clearly establishing that one drops before the other. However, what if we do the following? let (x, y) = (vec![], vec![]); Does either value strictly outlive the other? The answer is in fact no , neither value strictly outlives the other. Of course, one of x or y will be dropped before the other, but the actual order is not specified. Tuples aren't special in this regard; composite structures just don't guarantee their destruction order as of Rust 1.0. We could specify this for the fields of built-in composites like tuples and structs. However, what about something like Vec? Vec has to manually drop its elements via pure-library code. In general, anything that implements Drop has a chance to fiddle with its innards during its final death knell. Therefore the compiler can't sufficiently reason about the actual destruction order of the contents of any type that implements Drop. So why do we care? We care because if the type system isn't careful, it could accidentally make dangling pointers. Consider the following simple program: struct Inspector<'a>(&'a u8); fn main() { let (inspector, days); days = Box::new(1); inspector = Inspector(&days);\n} This program is totally sound and compiles today. The fact that days does not strictly outlive inspector doesn't matter. As long as the inspector is alive, so is days. However if we add a destructor, the program will no longer compile! struct Inspector<'a>(&'a u8); impl<'a> Drop for Inspector<'a> { fn drop(&mut self) { println!(\"I was only {} days from retirement!\", self.0); }\n} fn main() { let (inspector, days); days = Box::new(1); inspector = Inspector(&days); // Let's say `days` happens to get dropped first. 
// Then when Inspector is dropped, it will try to read free'd memory!\n} :12:28: 12:32 error: `days` does not live long enough\n:12 inspector = Inspector(&days); ^~~~\n:9:11: 15:2 note: reference must be valid for the block at 9:10...\n:9 fn main() {\n:10 let (inspector, days);\n:11 days = Box::new(1);\n:12 inspector = Inspector(&days);\n:13 // Let's say `days` happens to get dropped first.\n:14 // Then when Inspector is dropped, it will try to read free'd memory! ...\n:10:27: 15:2 note: ...but borrowed value is only valid for the block suffix following statement 0 at 10:26\n:10 let (inspector, days);\n:11 days = Box::new(1);\n:12 inspector = Inspector(&days);\n:13 // Let's say `days` happens to get dropped first.\n:14 // Then when Inspector is dropped, it will try to read free'd memory!\n:15 } Implementing Drop lets the Inspector execute some arbitrary code during its death. This means it can potentially observe that types that are supposed to live as long as it does actually were destroyed first. Interestingly, only generic types need to worry about this. If they aren't generic, then the only lifetimes they can harbor are 'static, which will truly live forever . This is why this problem is referred to as sound generic drop . Sound generic drop is enforced by the drop checker . As of this writing, some of the finer details of how the drop checker validates types are totally up in the air. However The Big Rule is the subtlety that we have focused on this whole section: For a generic type to soundly implement drop, its generic arguments must strictly outlive it. Obeying this rule is (usually) necessary to satisfy the borrow checker; obeying it is sufficient but not necessary to be sound. That is, if your type obeys this rule then it's definitely sound to drop. The reason that it is not always necessary to satisfy the above rule is that some Drop implementations will not access borrowed data even though their type gives them the capability for such access. 
For example, this variant of the above Inspector example will never access borrowed data: struct Inspector<'a>(&'a u8, &'static str); impl<'a> Drop for Inspector<'a> { fn drop(&mut self) { println!(\"Inspector(_, {}) knows when *not* to inspect.\", self.1); }\n} fn main() { let (inspector, days); days = Box::new(1); inspector = Inspector(&days, \"gadget\"); // Let's say `days` happens to get dropped first. // Even when Inspector is dropped, its destructor will not access the // borrowed `days`.\n} Likewise, this variant will also never access borrowed data: use std::fmt; struct Inspector(T, &'static str); impl Drop for Inspector { fn drop(&mut self) { println!(\"Inspector(_, {}) knows when *not* to inspect.\", self.1); }\n} fn main() { let (inspector, days): (Inspector<&u8>, Box); days = Box::new(1); inspector = Inspector(&days, \"gadget\"); // Let's say `days` happens to get dropped first. // Even when Inspector is dropped, its destructor will not access the // borrowed `days`.\n} However, both of the above variants are rejected by the borrow checker during the analysis of fn main, saying that days does not live long enough. The reason is that the borrow checking analysis of main does not know about the internals of each Inspector's Drop implementation. As far as the borrow checker knows while it is analyzing main, the body of an inspector's destructor might access that borrowed data. Therefore, the drop checker forces all borrowed data in a value to strictly outlive that value.","breadcrumbs":"所有権とライフタイム » Drop Check","id":"29","title":"Drop Check"},"3":{"body":"安全な Rust とアンセーフな Rust とはどう関係しているのでしょうか? どのように影響し合うのでしょうか? 
unsafe キーワードがインターフェースとなり、安全な Rust とアンセーフな Rust とを分離します。 このため、安全な Rust は安全な言語で、アンセーフな部分は完全に境界外に管理されている、と言うことができるのです。 unsafe は 2 つの目的に使われます。コンパイラがチェックできない契約が存在する事を宣言することと、 コードが契約に準拠していることがプログラマによってチェックされた事を宣言する事です。 関数 と トレイトの宣言 に未チェックな契約が存在する事を、unsafe を使って示すことができます。 関数に unsafe を使うと、ドキュメントを読んで、 要求された契約を守るように関数を使うことを、その関数のユーザーに要請することになります。 トレイトの宣言に unsafe を使うと、そのトレイトを実装するユーザーに対し、ドキュメントをチェックして契約を守るよう要請します。 コードブロックに使われた unsafe は、そのブロックで呼ばれているアンセーフな関数が要求する契約は守られていて、コードが信頼出来る事を意味します。unsafe をトレイトの実装に使うと、その実装がトレイトのドキュメントに書かれている契約に準拠している事を示します。 標準ライブラリにはいくつものアンセーフな関数があります。例えば、 slice::get_unchecked は未チェックのインデックス参照を実行します。自由自在にメモリ安全性に違反できます。 mem::transmute は、型安全の仕組みを好きなようにすり抜けて、ある値が特定の型であると再解釈します(詳細は 変換 をみてください)。 サイズが確定している型の生ポインタには、固有の offset メソッドがあります。渡されたオフセットが LLVM が定める \"境界内\" になければ、未定義の挙動を引き起こします。 すべての FFI 関数は unsafe です。なぜなら Rust コンパイラは、他の言語が実行するどんな操作もチェックできないからです。 Rust 1.0 現在、アンセーフなトレイトは 2 つしかありません。 Send は API を持たないマーカートレイトで、実装された型が他のスレッドに安全に送れる(ムーブできる)ことを約束します。 Sync もマーカートレイトで、このトレイトを実装した型は、共有された参照を使って安全に複数のスレッドで共有できる事を約束します。 また、多くの Rust 標準ライブラリは内部でアンセーフな Rust を使っています。ただ、標準ライブラリの 実装はプログラマが徹底的にチェックしているので、アンセーフな Rust の上に実装された安全な Rust は安全であると仮定して良いでしょう。 このように分離する目的は、結局のところ、安全な Rust のたった一つの基本的な性質にあります。 どうやっても、安全な Rust では未定義な挙動を起こせない。 このように安全とアンセーフを分けると、安全な Rust は、自分が利用するアンセーフな Rust が正しく書かれている事、 つまりアンセーフな Rust がそれが守るべき契約を実際に守っている事、を本質的に信頼しなくてはいけません。 逆に、アンセーフな Rust は安全な Rust を注意して信頼しなくてはいけません。 例えば、Rust には PartialOrdトレイトと Ordトレイトがあり、単に比較可能な型と全順序が 定義されている型(任意の値が同じ型の他の値と比べて等しいか、大きいか、小さい)とを区別します。 順序つきマップの BTreeMap は半順序の型には使えないので、キーとして使われる型が Ordトレイトを 実装している事を要求します。 しかし BTreeMap の実装ではアンセーフな Rust が使われていて、アンセーフな Rust は渡された Ord の実装が 適切であるとは仮定できません。 BTreeMap 内部のアンセーフな部分は、キー型の Ord の実装が全順序ではない場合でも、必要な契約が すべて守られるよう注意深く書かれなくてはいけません。 アンセーフな Rust は安全な Rust を無意識には信頼できません。アンセーフな Rust コードを書くときには、 安全な Rust の特定のコードのみに依存する必要があり、 安全な Rust が将来にわたって同様の安全性を提供すると仮定してはいけません。 この問題を解決するために unsafe なトレイトが存在します。理論上は、BTreeMap 型は キーが Ord ではなく、新しいトレイトUnsafeOrd を実装する事を要求する事ができます。 このようなコードになるでしょう。 use 
std::cmp::Ordering; unsafe trait UnsafeOrd { fn cmp(&self, other: &Self) -> Ordering;\n} この場合、UnsafeOrd を実装する型は、このトレイトが期待する契約に準拠している事を示すために unsafe キーワードを使うことになります。 この状況では、BTreeMap 内部のアンセーフな Rust は、キー型が UnsafeOrd を正しく実装していると 信用する事ができます。もしそうで無ければ、それはトレイトの実装の問題であり、 これは Rust の安全性の保証と一致しています。 トレイトに unsafe をつけるかどうかは API デザインにおける選択です。 Rust では従来 unsafe なトレイトを避けてきました。そうしないとアンセーフな Rust が 蔓延してしまい、好ましくないからです。 Send と Sync が unsafe となっているのは、スレッドの安全性が 基本的な性質 であり、 間違った Ord の実装に対して危険なコードが防衛できるのと同様の意味では防衛できないからです。 あなたが宣言したトレイトを unsafe とマークするかどうかも、同じようにじっくりと考えてください。 もし unsafe なコードがそのトレイトの間違った実装から防御することが合理的に不可能であるなら、 そのトレイトを unsafe とするのは合理的な選択です。 余談ですが、unsafe なトレイトである Send と Sync は、それらを実装する事が安全だと 実証可能な場合には自動的に実装されます。 Send は、Send を実装した型だけから構成される型に対して、自動的に実装されます。 Sync は、Sync を実装した型だけから構成される型に対して、自動的に実装されます。 これが安全な Rust とアンセーフな Rust のダンスです。 これは、安全な Rust をできるだけ快適に使えるように、しかしアンセーフな Rust を書くには それ以上の努力と注意深さが要求されるようなデザインになっています。 この本の残りでは、どういう点に注意しなくはいけないのか、 アンセーフな Rust を維持するための契約とは何なのかを議論します。","breadcrumbs":"安全と危険のご紹介 » 安全と危険の相互作用","id":"3","title":"安全と危険の相互作用"},"30":{"body":"The precise rules that govern drop checking may be less restrictive in the future. The current analysis is deliberately conservative and trivial; it forces all borrowed data in a value to outlive that value, which is certainly sound. Future versions of the language may make the analysis more precise, to reduce the number of cases where sound code is rejected as unsafe. This would help address cases such as the two Inspectors above that know not to inspect during destruction. In the meantime, there is an unstable attribute that one can use to assert (unsafely) that a generic type's destructor is guaranteed to not access any expired data, even if its type gives it the capability to do so. That attribute is called may_dangle and was introduced in [RFC 1327] (https://github.com/rust-lang/rfcs/blob/master/text/1327-dropck-param-eyepatch.md). 
To deploy it on the Inspector example from above, we would write: struct Inspector<'a>(&'a u8, &'static str); unsafe impl<#[may_dangle] 'a> Drop for Inspector<'a> { fn drop(&mut self) { println!(\"Inspector(_, {}) knows when *not* to inspect.\", self.1); }\n} Use of this attribute requires the Drop impl to be marked unsafe because the compiler is not checking the implicit assertion that no potentially expired data (e.g. self.0 above) is accessed. The attribute can be applied to any number of lifetime and type parameters. In the following example, we assert that we access no data behind a reference of lifetime 'b and that the only uses of T will be moves or drops, but omit the attribute from 'a and U, because we do access data with that lifetime and that type: use std::fmt::Display; struct Inspector<'a, 'b, T, U: Display>(&'a u8, &'b u8, T, U); unsafe impl<'a, #[may_dangle] 'b, #[may_dangle] T, U: Display> Drop for Inspector<'a, 'b, T, U> { fn drop(&mut self) { println!(\"Inspector({}, _, _, {})\", self.0, self.3); }\n} It is sometimes obvious that no such access can occur, like the case above. However, when dealing with a generic type parameter, such access can occur indirectly. Examples of such indirect access are: invoking a callback, via a trait method call. (Future changes to the language, such as impl specialization, may add other avenues for such indirect access.) Here is an example of invoking a callback: struct Inspector(T, &'static str, Box fn(&'r T) -> String>); impl Drop for Inspector { fn drop(&mut self) { // The `self.2` call could access a borrow e.g. if `T` is `&'a _`. println!(\"Inspector({}, {}) unwittingly inspects expired data.\", (self.2)(&self.0), self.1); }\n} Here is an example of a trait method call: use std::fmt; struct Inspector(T, &'static str); impl Drop for Inspector { fn drop(&mut self) { // There is a hidden call to `::fmt` below, which // could access a borrow e.g. 
if `T` is `&'a _` println!(\"Inspector({}, {}) unwittingly inspects expired data.\", self.0, self.1); }\n} And of course, all of these accesses could be further hidden within some other method invoked by the destructor, rather than being written directly within it. In all of the above cases where the &'a u8 is accessed in the destructor, adding the #[may_dangle] attribute makes the type vulnerable to misuse that the borrow checker will not catch, inviting havoc. It is better to avoid adding the attribute.","breadcrumbs":"所有権とライフタイム » An Escape Hatch","id":"30","title":"An Escape Hatch"},"31":{"body":"It turns out that when writing unsafe code, we generally don't need to worry at all about doing the right thing for the drop checker. However there is one special case that you need to worry about, which we will look at in the next section.","breadcrumbs":"所有権とライフタイム » Is that all about drop checker?","id":"31","title":"Is that all about drop checker?"},"32":{"body":"When working with unsafe code, we can often end up in a situation where types or lifetimes are logically associated with a struct, but not actually part of a field. This most commonly occurs with lifetimes. For instance, the Iter for &'a [T] is (approximately) defined as follows: struct Iter<'a, T: 'a> { ptr: *const T, end: *const T,\n} However because 'a is unused within the struct's body, it's unbounded . Because of the troubles this has historically caused, unbounded lifetimes and types are forbidden in struct definitions. Therefore we must somehow refer to these types in the body. Correctly doing this is necessary to have correct variance and drop checking. We do this using PhantomData, which is a special marker type. PhantomData consumes no space, but simulates a field of the given type for the purpose of static analysis. 
This was deemed to be less error-prone than explicitly telling the type-system the kind of variance that you want, while also providing other useful things such as the information needed by drop check. Iter logically contains a bunch of &'a Ts, so this is exactly what we tell the PhantomData to simulate: use std::marker; struct Iter<'a, T: 'a> { ptr: *const T, end: *const T, _marker: marker::PhantomData<&'a T>,\n} and that's it. The lifetime will be bounded, and your iterator will be variant over 'a and T. Everything Just Works. Another important example is Vec, which is (approximately) defined as follows: struct Vec<T> { data: *const T, // *const for variance! len: usize, cap: usize,\n} Unlike the previous example, it appears that everything is exactly as we want. Every generic argument to Vec shows up in at least one field. Good to go! Nope. The drop checker will generously determine that Vec does not own any values of type T. This will in turn make it conclude that it doesn't need to worry about Vec dropping any T's in its destructor for determining drop check soundness. This will in turn allow people to create unsoundness using Vec's destructor. In order to tell dropck that we do own values of type T, and therefore may drop some T's when we drop, we must add an extra PhantomData saying exactly that: use std::marker; struct Vec<T> { data: *const T, // *const for covariance! len: usize, cap: usize, _marker: marker::PhantomData<T>,\n} Raw pointers that own an allocation is such a pervasive pattern that the standard library made a utility for itself called Unique which: wraps a *const T for variance includes a PhantomData<T> auto-derives Send/Sync as if T was contained marks the pointer as NonZero for the null-pointer optimization","breadcrumbs":"所有権とライフタイム » PhantomData","id":"32","title":"PhantomData"},"33":{"body":"The mutual exclusion property of mutable references can be very limiting when working with a composite structure. 
The borrow checker understands some basic stuff, but will fall over pretty easily. It does understand structs sufficiently to know that it's possible to borrow disjoint fields of a struct simultaneously. So this works today: struct Foo { a: i32, b: i32, c: i32,\n} let mut x = Foo {a: 0, b: 0, c: 0};\nlet a = &mut x.a;\nlet b = &mut x.b;\nlet c = &x.c;\n*b += 1;\nlet c2 = &x.c;\n*a += 10;\nprintln!(\"{} {} {} {}\", a, b, c, c2); However borrowck doesn't understand arrays or slices in any way, so this doesn't work: let mut x = [1, 2, 3];\nlet a = &mut x[0];\nlet b = &mut x[1];\nprintln!(\"{} {}\", a, b); :4:14: 4:18 error: cannot borrow `x[..]` as mutable more than once at a time\n:4 let b = &mut x[1]; ^~~~\n:3:14: 3:18 note: previous borrow of `x[..]` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `x[..]` until the borrow ends\n:3 let a = &mut x[0]; ^~~~\n:6:2: 6:2 note: previous borrow ends here\n:1 fn main() {\n:2 let mut x = [1, 2, 3];\n:3 let a = &mut x[0];\n:4 let b = &mut x[1];\n:5 println!(\"{} {}\", a, b);\n:6 } ^\nerror: aborting due to 2 previous errors While it was plausible that borrowck could understand this simple case, it's pretty clearly hopeless for borrowck to understand disjointness in general container types like a tree, especially if distinct keys actually do map to the same value. In order to \"teach\" borrowck that what we're doing is ok, we need to drop down to unsafe code. For instance, mutable slices expose a split_at_mut function that consumes the slice and returns two mutable slices. One for everything to the left of the index, and one for everything to the right. Intuitively we know this is safe because the slices don't overlap, and therefore alias. 
However the implementation requires some unsafety: fn split_at_mut(&mut self, mid: usize) -> (&mut [T], &mut [T]) { let len = self.len(); let ptr = self.as_mut_ptr(); assert!(mid <= len); unsafe { (from_raw_parts_mut(ptr, mid), from_raw_parts_mut(ptr.offset(mid as isize), len - mid)) }\n} This is actually a bit subtle. So as to avoid ever making two &mut's to the same value, we explicitly construct brand-new slices through raw pointers. However more subtle is how iterators that yield mutable references work. The iterator trait is defined as follows: trait Iterator { type Item; fn next(&mut self) -> Option;\n} Given this definition, Self::Item has no connection to self. This means that we can call next several times in a row, and hold onto all the results concurrently . This is perfectly fine for by-value iterators, which have exactly these semantics. It's also actually fine for shared references, as they admit arbitrarily many references to the same thing (although the iterator needs to be a separate object from the thing being shared). But mutable references make this a mess. At first glance, they might seem completely incompatible with this API, as it would produce multiple mutable references to the same object! However it actually does work, exactly because iterators are one-shot objects. Everything an IterMut yields will be yielded at most once, so we don't actually ever yield multiple mutable references to the same piece of data. Perhaps surprisingly, mutable iterators don't require unsafe code to be implemented for many types! 
For instance here's a singly linked list:

```rust
# fn main() {}
type Link<T> = Option<Box<Node<T>>>;

struct Node<T> {
    elem: T,
    next: Link<T>,
}

pub struct LinkedList<T> {
    head: Link<T>,
}

pub struct IterMut<'a, T: 'a>(Option<&'a mut Node<T>>);

impl<T> LinkedList<T> {
    fn iter_mut(&mut self) -> IterMut<T> {
        IterMut(self.head.as_mut().map(|node| &mut **node))
    }
}

impl<'a, T> Iterator for IterMut<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        self.0.take().map(|node| {
            self.0 = node.next.as_mut().map(|node| &mut **node);
            &mut node.elem
        })
    }
}
```

Here's a mutable slice:

```rust
# fn main() {}
use std::mem;

pub struct IterMut<'a, T: 'a>(&'a mut [T]);

impl<'a, T> Iterator for IterMut<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        let slice = mem::replace(&mut self.0, &mut []);
        if slice.is_empty() { return None; }

        let (l, r) = slice.split_at_mut(1);
        self.0 = r;
        l.get_mut(0)
    }
}

impl<'a, T> DoubleEndedIterator for IterMut<'a, T> {
    fn next_back(&mut self) -> Option<Self::Item> {
        let slice = mem::replace(&mut self.0, &mut []);
        if slice.is_empty() { return None; }

        let new_len = slice.len() - 1;
        let (l, r) = slice.split_at_mut(new_len);
        self.0 = l;
        r.get_mut(0)
    }
}
```

And here's a binary tree:

```rust
# fn main() {}
use std::collections::VecDeque;

type Link<T> = Option<Box<Node<T>>>;

struct Node<T> {
    elem: T,
    left: Link<T>,
    right: Link<T>,
}

pub struct Tree<T> {
    root: Link<T>,
}

struct NodeIterMut<'a, T: 'a> {
    elem: Option<&'a mut T>,
    left: Option<&'a mut Node<T>>,
    right: Option<&'a mut Node<T>>,
}

enum State<'a, T: 'a> {
    Elem(&'a mut T),
    Node(&'a mut Node<T>),
}

pub struct IterMut<'a, T: 'a>(VecDeque<NodeIterMut<'a, T>>);

impl<T> Tree<T> {
    pub fn iter_mut(&mut self) -> IterMut<T> {
        let mut deque = VecDeque::new();
        self.root.as_mut().map(|root| deque.push_front(root.iter_mut()));
        IterMut(deque)
    }
}

impl<T> Node<T> {
    pub fn iter_mut(&mut self) -> NodeIterMut<T> {
        NodeIterMut {
            elem: Some(&mut self.elem),
            left: self.left.as_mut().map(|node| &mut **node),
            right: self.right.as_mut().map(|node| &mut **node),
        }
    }
}

impl<'a, T> Iterator for NodeIterMut<'a, T> {
    type Item = State<'a, T>;

    fn next(&mut self) -> Option<Self::Item> {
        match self.left.take() {
            Some(node) => Some(State::Node(node)),
            None => match self.elem.take() {
                Some(elem) => Some(State::Elem(elem)),
                None => match self.right.take() {
                    Some(node) => Some(State::Node(node)),
                    None => None,
                }
            }
        }
    }
}

impl<'a, T> DoubleEndedIterator for NodeIterMut<'a, T> {
    fn next_back(&mut self) -> Option<Self::Item> {
        match self.right.take() {
            Some(node) => Some(State::Node(node)),
            None => match self.elem.take() {
                Some(elem) => Some(State::Elem(elem)),
                None => match self.left.take() {
                    Some(node) => Some(State::Node(node)),
                    None => None,
                }
            }
        }
    }
}

impl<'a, T> Iterator for IterMut<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        loop {
            match self.0.front_mut().and_then(|node_it| node_it.next()) {
                Some(State::Elem(elem)) => return Some(elem),
                Some(State::Node(node)) => self.0.push_front(node.iter_mut()),
                None => if let None = self.0.pop_front() { return None },
            }
        }
    }
}

impl<'a, T> DoubleEndedIterator for IterMut<'a, T> {
    fn next_back(&mut self) -> Option<Self::Item> {
        loop {
            match self.0.back_mut().and_then(|node_it| node_it.next_back()) {
                Some(State::Elem(elem)) => return Some(elem),
                Some(State::Node(node)) => self.0.push_back(node.iter_mut()),
                None => if let None = self.0.pop_back() { return None },
            }
        }
    }
}
```

All of these are completely safe and work on stable Rust! This ultimately falls out of the simple struct case we saw before: Rust understands that you can safely split a mutable reference into subfields. We can then encode permanently consuming a reference via Options (or in the case of slices, replacing with an empty slice).

# Type Conversions

At the end of the day, everything is just a pile of bits somewhere, and type systems are just there to help us use those bits right.
There are two common problems with typing bits: needing to reinterpret those exact bits as a different type, and needing to change the bits to have equivalent meaning for a different type. Because Rust encourages encoding important properties in the type system, these problems are incredibly pervasive. As such, Rust consequently gives you several ways to solve them.

First we'll look at the ways that *Safe Rust* gives you to reinterpret values. The most trivial way to do this is to just destructure a value into its constituent parts and then build a new type out of them. e.g.

```rust
struct Foo {
    x: u32,
    y: u16,
}

struct Bar {
    a: u32,
    b: u16,
}

fn reinterpret(foo: Foo) -> Bar {
    let Foo { x, y } = foo;
    Bar { a: x, b: y }
}
```

But this is, at best, annoying. For common conversions, Rust provides more ergonomic alternatives.

# Coercions

Types can implicitly be coerced to change in certain contexts. These changes are generally just *weakening* of types, largely focused around pointers and lifetimes. They mostly exist to make Rust "just work" in more cases, and are largely harmless.

Here's all the kinds of coercion. Coercion is allowed between the following types:

* Transitivity: `T_1` to `T_3` where `T_1` coerces to `T_2` and `T_2` coerces to `T_3`
* Pointer Weakening:
    * `&mut T` to `&T`
    * `*mut T` to `*const T`
    * `&T` to `*const T`
    * `&mut T` to `*mut T`
* Unsizing: `T` to `U` if `T` implements `CoerceUnsized<U>`
* Deref coercion: Expression `&x` of type `&T` to `&*x` of type `&U` if `T` derefs to `U` (i.e. `T: Deref<Target=U>`)

`CoerceUnsized<Pointer<U>> for Pointer<T> where T: Unsize<U>` is implemented for all pointer types (including smart pointers like Box and Rc).
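To make the list concrete, here is a sketch showing a few of these coercions firing at explicitly typed sites:

```rust
// A &str parameter makes the argument position a coercion site.
fn len_of(s: &str) -> usize {
    s.len()
}

fn main() {
    // Deref coercion: &String coerces to &str, since String: Deref<Target = str>.
    let owned = String::from("hello");
    assert_eq!(len_of(&owned), 5);

    // Pointer weakening: &mut T coerces to &T at an explicitly typed let.
    let mut n = 0u8;
    let r: &u8 = &mut n;
    assert_eq!(*r, 0);

    // Unsizing: &[i32; 3] coerces to &[i32] (via CoerceUnsized).
    let arr = [1, 2, 3];
    let slice: &[i32] = &arr;
    assert_eq!(slice.len(), 3);
}
```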
`Unsize` is only implemented automatically, and enables the following transformations:

* `[T; n]` => `[T]`
* `T` => `Trait` where `T: Trait`
* `Foo<..., T, ...>` => `Foo<..., U, ...>` where:
    * `T: Unsize<U>`
    * `Foo` is a struct
    * Only the last field of `Foo` has type involving `T`
    * `T` is not part of the type of any other fields
    * `Bar<T>: Unsize<Bar<U>>`, if the last field of `Foo` has type `Bar<T>`

Coercions occur at a *coercion site*. Any location that is explicitly typed will cause a coercion to its type. If inference is necessary, the coercion will not be performed. Exhaustively, the coercion sites for an expression `e` to type `U` are:

* let statements, statics, and consts: `let x: U = e`
* Arguments to functions: `takes_a_U(e)`
* Any expression that will be returned: `fn foo() -> U { e }`
* Struct literals: `Foo { some_u: e }`
* Array literals: `let x: [U; 10] = [e, ..]`
* Tuple literals: `let x: (U, ..) = (e, ..)`
* The last expression in a block: `let x: U = { ..; e }`

Note that we do not perform coercions when matching traits (except for receivers, see below). If there is an impl for some type `U` and `T` coerces to `U`, that does not constitute an implementation for `T`. For example, the following will not type check, even though it is OK to coerce `t` to `&T` and there is an impl for `&T`:

```rust
trait Trait {}

fn foo<X: Trait>(t: X) {}

impl<'a> Trait for &'a i32 {}

fn main() {
    let t: &mut i32 = &mut 0;
    foo(t);
}
```

```text
<anon>:10:5: 10:8 error: the trait bound `&mut i32 : Trait` is not satisfied [E0277]
<anon>:10     foo(t);
              ^~~
```

# The Dot Operator

The dot operator will perform a lot of magic to convert types. It will perform auto-referencing, auto-dereferencing, and coercion until types match.

TODO: steal information from http://stackoverflow.com/questions/28519997/what-are-rusts-exact-auto-dereferencing-rules/28552082#28552082

# Casts

Casts are a superset of coercions: every coercion can be explicitly invoked via a cast.
However some conversions *require* a cast. While coercions are pervasive and largely harmless, these "true casts" are rare and potentially dangerous. As such, casts must be explicitly invoked using the `as` keyword: `expr as Type`.

True casts generally revolve around raw pointers and the primitive numeric types. Even though they're dangerous, these casts are *infallible* at runtime. If a cast triggers some subtle corner case no indication will be given that this occurred. The cast will simply succeed. That said, casts must be valid at the type level, or else they will be prevented statically. For instance, `7u8 as bool` will not compile.

That said, casts aren't `unsafe` because they generally can't violate memory safety *on their own*. For instance, converting an integer to a raw pointer can very easily lead to terrible things. However the act of creating the pointer itself is safe, because actually using a raw pointer is already marked as `unsafe`.

Here's an exhaustive list of all the true casts. For brevity, we will use `*` to denote either a `*const` or `*mut`, and `integer` to denote any integral primitive:

* `*T as *U` where `T, U: Sized`
* `*T as *U` TODO: explain unsized situation
* `*T as integer`
* `integer as *T`
* `number as number`
* `C-like-enum as integer`
* `bool as integer`
* `char as integer`
* `u8 as char`
* `&[T; n] as *const T`
* `fn as *T` where `T: Sized`
* `fn as integer`

Note that lengths are not adjusted when casting raw slices; `*const [u16] as *const [u8]` creates a slice that only includes half of the original memory.

Casting is not transitive, that is, even if `e as U1 as U2` is a valid expression, `e as U2` is not necessarily so.

For numeric casts, there are quite a few cases to consider:

* casting between two integers of the same size (e.g. i32 -> u32) is a no-op
* casting from a larger integer to a smaller integer (e.g. u32 -> u8) will truncate
* casting from a smaller integer to a larger integer (e.g.
u8 -> u32) will
    * zero-extend if the source is unsigned
    * sign-extend if the source is signed
* casting from a float to an integer will round the float towards zero
    * **NOTE: currently this will cause Undefined Behavior if the rounded value cannot be represented by the target integer type**. This includes Inf and NaN. This is a bug and will be fixed.
* casting from an integer to float will produce the floating point representation of the integer, rounded if necessary (rounding strategy unspecified)
* casting from an f32 to an f64 is perfect and lossless
* casting from an f64 to an f32 will produce the closest possible value (rounding strategy unspecified)
    * **NOTE: currently this will cause Undefined Behavior if the value is finite but larger or smaller than the largest or smallest finite value representable by f32**. This is a bug and will be fixed.

# Transmutes

Get out of our way type system! We're going to reinterpret these bits or die trying! Even though this book is all about doing things that are unsafe, I really can't emphasize enough that you should deeply think about finding Another Way than the operations covered in this section. This is really, truly, the most horribly unsafe thing you can do in Rust. The guardrails here are dental floss.

`mem::transmute<T, U>` takes a value of type `T` and reinterprets it to have type `U`. The only restriction is that `T` and `U` are verified to have the same size. The ways to cause Undefined Behavior with this are mind boggling.

First and foremost, creating an instance of *any* type with an invalid state is going to cause arbitrary chaos that can't really be predicted. Transmute has an overloaded return type. If you do not specify the return type it may produce a surprising type to satisfy inference.
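As a concrete (and comparatively tame) sketch, here is a size-preserving transmute with both type parameters pinned explicitly, so inference cannot pick a surprising target type:

```rust
use std::mem;

fn main() {
    let x: u32 = 0x01020304;

    // SAFETY: u32 and [u8; 4] have the same size, and every bit
    // pattern is a valid value for both types.
    let bytes: [u8; 4] = unsafe { mem::transmute::<u32, [u8; 4]>(x) };

    // The safe, preferred API agrees with the result; reach for it
    // instead of transmute whenever it exists.
    assert_eq!(bytes, x.to_ne_bytes());
}
```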
* Making a primitive with an invalid value is UB
* Transmuting between non-repr(C) types is UB
* Transmuting an `&` to `&mut` is UB
    * Transmuting an `&` to `&mut` is *always* UB
    * No you can't do it
    * No you're not special
* Transmuting to a reference without an explicitly provided lifetime produces an unbounded lifetime

`mem::transmute_copy<T, U>` somehow manages to be *even more* wildly unsafe than this. It copies `size_of<U>` bytes out of an `&T` and interprets them as a `U`. The size check that `mem::transmute` has is gone (as it may be valid to copy out a prefix), though it is Undefined Behavior for `U` to be larger than `T`.

Also of course you can get most of the functionality of these functions using pointer casts.

# Working With Uninitialized Memory

All runtime-allocated memory in a Rust program begins its life as *uninitialized*. In this state the value of the memory is an indeterminate pile of bits that may or may not even reflect a valid state for the type that is supposed to inhabit that location of memory. Attempting to interpret this memory as a value of *any* type will cause Undefined Behavior. Do Not Do This.
Rust provides mechanisms to work with uninitialized memory in checked (safe) and unchecked (unsafe) ways.

# Working with Unsafe

For the most part, the tools for working with unsafe Rust only apply in a scoped and binary manner. Unfortunately, reality is significantly more complicated than that. For instance, consider the following simple function:

```rust
fn index(idx: usize, arr: &[u8]) -> Option<u8> {
    if idx < arr.len() {
        unsafe {
            Some(*arr.get_unchecked(idx))
        }
    } else {
        None
    }
}
```

Clearly, this function is safe. We check that the index is in bounds, and if it is, index into the array in an unchecked manner. But even in such a trivial function, the scope of the unsafe block is questionable. Consider changing the `<` to a `<=`:

```rust
fn index(idx: usize, arr: &[u8]) -> Option<u8> {
    if idx <= arr.len() {
        unsafe {
            Some(*arr.get_unchecked(idx))
        }
    } else {
        None
    }
}
```

*We only modified safe code*, yet the program is now unsound. This is the fundamental problem of safety: it's non-local. The soundness of our unsafe operations necessarily depends on the state established by otherwise "safe" operations.

Safety is modular in the sense that opting into unsafety doesn't require you to consider arbitrary other kinds of badness. For instance, doing an unchecked index into a slice doesn't mean you suddenly need to worry about the slice being null or containing uninitialized memory. Nothing fundamentally changes. However safety *isn't* modular in the sense that programs are inherently stateful, and your unsafe operations may depend on arbitrary other state.

Things get even trickier when we consider actual statefulness. Consider a simple implementation of `Vec`:

```rust
use std::ptr;

// Note this definition is incomplete. See the section on implementing Vec.
pub struct Vec<T> {
    ptr: *mut T,
    len: usize,
    cap: usize,
}

// Note this implementation does not correctly handle zero-sized types.
// We'll assume a nice imaginary world of only positive fixed-size types.
impl<T> Vec<T> {
    pub fn push(&mut self, elem: T) {
        if self.len == self.cap {
            // not important for this example
            self.reallocate();
        }
        unsafe {
            ptr::write(self.ptr.offset(self.len as isize), elem);
            self.len += 1;
        }
    }

    # fn reallocate(&mut self) { }
}

# fn main() {}
```

This code is simple enough to reasonably audit and verify. Now consider adding the following method:

```rust
fn make_room(&mut self) {
    // grow the capacity
    self.cap += 1;
}
```

This code is 100% Safe Rust, but it is also completely unsound. Changing the capacity violates the invariant of Vec (that `cap` reflects the space allocated for the Vec). The rest of Vec cannot guard against this. It *has to* trust the `cap` field because there's no way to verify it.

`unsafe` does more than pollute a single function: it pollutes a whole *module*. Generally, the only bullet-proof way to limit the scope of unsafe code is to use privacy at the module boundary. However, this works *perfectly*.
`make_room` is *not* a problem for the soundness of Vec, because it isn't a public method; only the module that defines it can call it. Also, since `make_room` directly accesses the private fields of Vec, it can only be written in the same module as Vec.

It is therefore possible for us to provide a safe abstraction that relies on complex invariants. This is *critical* to the relationship between Safe Rust and Unsafe Rust. As we've seen, unsafe code must trust *some* safe code, but can't trust safe code *in general*. You don't need to care about this when writing safe code, but unsafe code can't expect an arbitrary trait implementation or an arbitrary function passed to it to be well-behaved.

However, if unsafe code couldn't prevent safe code from messing up its state in arbitrary ways, safety might be a lost cause. Thankfully, it *can* prevent arbitrary code from messing with critical state, thanks to privacy. Safety lives!

# Checked Uninitialized Memory

Like C, all stack variables in Rust are uninitialized until a value is explicitly assigned to them. Unlike C, Rust statically prevents you from ever reading them until you do:

```rust
fn main() {
    let x: i32;
    println!("{}", x);
}
```

```text
src/main.rs:3:20: 3:21 error: use of possibly uninitialized variable: `x`
src/main.rs:3     println!("{}", x);
                                 ^
```

This is based off of a basic branch analysis: every branch must assign a value to `x` before it is first used. Interestingly, Rust doesn't require the variable to be mutable to perform a delayed initialization if every branch assigns exactly once. However the analysis does not take advantage of constant analysis or anything like that. So this compiles:

```rust
fn main() {
    let x: i32;

    if true {
        x = 1;
    } else {
        x = 2;
    }

    println!("{}", x);
}
```

but this doesn't:

```rust
fn main() {
    let x: i32;
    if true {
        x = 1;
    }
    println!("{}", x);
}
```

```text
src/main.rs:6:17: 6:18 error: use of possibly uninitialized variable: `x`
src/main.rs:6     println!("{}", x);
```

while this does:

```rust
fn main() {
    let x: i32;
    if true {
        x = 1;
        println!("{}", x);
    }
    // Don't care that there are branches where it's not initialized
    // since we don't use the value in those branches
}
```

Of course, while the analysis doesn't consider actual values, it does have a relatively sophisticated understanding of dependencies and control flow.
For instance, this works:

```rust
let x: i32;

loop {
    // Rust doesn't understand that this branch will be taken unconditionally,
    // because it relies on actual values.
    if true {
        // But it does understand that it will only be taken once because
        // we unconditionally break out of it. Therefore `x` doesn't
        // need to be marked as mutable.
        x = 0;
        break;
    }
}
// It also knows that it's impossible to get here without reaching the break.
// And therefore that `x` must be initialized here!
println!("{}", x);
```

If a value is moved out of a variable, that variable becomes logically uninitialized if the type of the value isn't Copy. That is:

```rust
fn main() {
    let x = 0;
    let y = Box::new(0);
    let z1 = x; // x is still valid because i32 is Copy
    let z2 = y; // y is now logically uninitialized because Box isn't Copy
}
```

However reassigning `y` in this example *would* require `y` to be marked as mutable, as a Safe Rust program could observe that the value of `y` changed:

```rust
fn main() {
    let mut y = Box::new(0);
    let z = y; // y is now logically uninitialized because Box isn't Copy
    y = Box::new(1); // reinitialize y
}
```

Otherwise it's like `y` is a brand new variable.

# Drop Flags

The examples in the previous section introduce an interesting problem for Rust. We have seen that it's possible to conditionally initialize, deinitialize, and reinitialize locations of memory totally safely. For Copy types, this isn't particularly notable since they're just a random pile of bits. However types with destructors are a different story: Rust needs to know whether to call a destructor whenever a variable is assigned to, or a variable goes out of scope. How can it do this with conditional initialization?

Note that this is not a problem that all assignments need worry about.
In particular, assigning through a dereference unconditionally drops, and assigning in a `let` unconditionally *doesn't* drop:

```rust
let mut x = Box::new(0); // let makes a fresh variable, so never need to drop
let y = &mut x;
*y = Box::new(1); // Deref assumes the referent is initialized, so always drops
```

This is only a problem when overwriting a previously initialized variable or one of its subfields.

It turns out that Rust actually tracks whether a type should be dropped or not *at runtime*. As a variable becomes initialized and uninitialized, a *drop flag* for that variable is toggled. When a variable might need to be dropped, this flag is evaluated to determine if it should be dropped.

Of course, it is often the case that a value's initialization state can be statically known at every point in the program. If this is the case, then the compiler can theoretically generate more efficient code! For instance, straight-line code has such *static drop semantics*:

```rust
let mut x = Box::new(0); // x was uninit; just overwrite.
let mut y = x;           // y was uninit; just overwrite and make x uninit.
x = Box::new(0);         // x was uninit; just overwrite.
y = x;                   // y was init; Drop y, overwrite it, and make x uninit!
                         // y goes out of scope; y was init; Drop y!
                         // x goes out of scope; x was uninit; do nothing.
```

Similarly, branched code where all branches have the same behavior with respect to initialization has static drop semantics:

```rust
# let condition = true;
let mut x = Box::new(0);    // x was uninit; just overwrite.
if condition {
    drop(x)                 // x gets moved out; make x uninit.
} else {
    println!("{}", x);
    drop(x)                 // x gets moved out; make x uninit.
}
x = Box::new(0);            // x was uninit; just overwrite.
                            // x goes out of scope; x was init; Drop x!
```

However code like this *requires* runtime information to correctly Drop:

```rust
# let condition = true;
let x;
if condition {
    x = Box::new(0);        // x was uninit; just overwrite.
    println!("{}", x);
}
                            // x goes out of scope; x might be uninit;
                            // check the flag!
```
Of course, in this case it's trivial to retrieve static drop semantics:

```rust
# let condition = true;
if condition {
    let x = Box::new(0);
    println!("{}", x);
}
```

The drop flags are tracked on the stack and no longer stashed in types that implement drop.

# Unchecked Uninitialized Memory

One interesting exception to this rule is working with arrays. Safe Rust doesn't permit you to partially initialize an array. When you initialize an array, you can either set every value to the same thing with `let x = [val; N]`, or you can specify each member individually with `let x = [val1, val2, val3]`. Unfortunately this is pretty rigid, especially if you need to initialize your array in a more incremental or dynamic way.

Unsafe Rust gives us a powerful tool to handle this problem: `mem::uninitialized`. This function pretends to return a value when really it does nothing at all. Using it, we can convince Rust that we have initialized a variable, allowing us to do trickier things with conditional and incremental initialization.

Unfortunately, this opens us up to all kinds of problems. Assignment has a different meaning to Rust based on whether it believes that a variable is initialized or not. If it's believed uninitialized, then Rust will semantically just memcopy the bits over the uninitialized ones, and do nothing else. However if Rust believes a value to be initialized, it will try to `Drop` the old value! Since we've tricked Rust into believing that the value is initialized, we can no longer safely use normal assignment.

This is also a problem if you're working with a raw system allocator, which returns a pointer to uninitialized memory.

To handle this, we must use the `ptr` module. In particular, it provides three functions that allow us to assign bytes to a location in memory without dropping the old value: `write`, `copy`, and `copy_nonoverlapping`.

* `ptr::write(ptr, val)` takes a `val` and moves it into the address pointed to by `ptr`.
* `ptr::copy(src, dest, count)` copies the bits that `count` T's would occupy from src to dest. (this is equivalent to memmove -- note that the argument order is reversed!)
* `ptr::copy_nonoverlapping(src, dest, count)` does what `copy` does, but a little faster on the assumption that the two ranges of memory don't overlap. (this is equivalent to memcpy -- note that the argument order is reversed!)

It should go without saying that these functions, if misused, will cause serious havoc or just straight up Undefined Behavior. The only things that these functions *themselves* require is that the locations you want to read and write are allocated. However the ways writing arbitrary bits to arbitrary locations of memory can break things are basically uncountable!

Putting this all together, we get the following:

```rust
use std::mem;
use std::ptr;

// size of the array is hard-coded but easy to change. This means we can't
// use [a, b, c] syntax to initialize the array, though!
const SIZE: usize = 10;

let mut x: [Box<u32>; SIZE];

unsafe {
    // convince Rust that x is Totally Initialized
    x = mem::uninitialized();
    for i in 0..SIZE {
        // very carefully overwrite each index without reading it
        // NOTE: exception safety is not a concern; Box can't panic
        ptr::write(&mut x[i], Box::new(i as u32));
    }
}

println!("{:?}", x);
```

It's worth noting that you don't need to worry about ptr::write-style shenanigans with types which don't implement Drop or contain Drop types, because Rust knows not to try to drop them. Similarly you should be able to assign to fields of partially initialized structs directly if those fields don't contain any Drop types.

However when working with uninitialized memory you need to be ever-vigilant for Rust trying to drop values you make like this before they're fully initialized. Every control path through that variable's scope must initialize the value before it ends, if it has a destructor. *This includes code panicking*.

And that's about it for working with uninitialized memory!
Basically nothing anywhere expects to be handed uninitialized memory, so if you're going to pass it around at all, be sure to be *really* careful.

# The Perils Of Ownership Based Resource Management (OBRM)

OBRM (AKA RAII: Resource Acquisition Is Initialization) is something you'll interact with a lot in Rust, especially through the standard library. The pattern, roughly, is this: to acquire a resource, you create an object that manages it; to release the resource, you simply destroy the object, and it cleans the resource up for you. The most common "resource" managed this way is simply *memory*. Box, Rc, and everything else in std::collections exists to make managing memory convenient. These are particularly important in Rust, where we can't fall back on a pervasive GC to manage memory. This is worth emphasizing: the idea of "management" is at the heart of Rust, and it is by no means limited to memory. Almost every resource, such as a thread, file, or socket, is exposed through an API built on this idea.

# Constructors

There is exactly one way to create an instance of a user-defined type: name it, and initialize all its fields at once:

```rust
struct Foo {
    a: u8,
    b: u32,
    c: bool,
}

enum Bar {
    X(u32),
    Y(bool),
}

struct Unit;

let foo = Foo { a: 0, b: 1, c: false };
let bar = Bar::X(0);
let empty = Unit;
```

That's it. Every other way you make an instance of a type is just calling a totally vanilla function that does some stuff and eventually bottoms out to The One True Constructor.

Unlike C++, Rust does not come with a slew of built-in kinds of constructor. There are no Copy, Default, Assignment, Move, or whatever constructors. The reasons for this are varied, but it largely boils down to Rust's philosophy of *being explicit*.

Move constructors are meaningless in Rust because we don't enable types to "care" about their location in memory. Every type must be ready for it to be blindly memcopied to somewhere else in memory. This means pure on-the-stack-but-still-movable intrusive linked lists are simply not happening in Rust (safely).
Assignment and copy constructors similarly don't exist because move semantics are the only semantics in Rust. At most `x = y` just moves the bits of `y` into the `x` variable. Rust *does* provide two facilities for providing C++'s copy-oriented semantics: `Copy` and `Clone`. Clone is our moral equivalent of a copy constructor, but it's never implicitly invoked. You have to explicitly call `clone` on an element you want to be cloned. Copy is a special case of Clone where the implementation is just "copy the bits". Copy types *are* implicitly cloned whenever they're moved, but because of the definition of Copy this just means not treating the old copy as uninitialized -- a no-op.

While Rust provides a `Default` trait for specifying the moral equivalent of a default constructor, it's incredibly rare for this trait to be used. This is because variables *aren't implicitly initialized*. Default is basically only useful for generic programming. In concrete contexts, a type will provide a static `new` method for any kind of "default" constructor. This has no relation to `new` in other languages and has no special meaning. It's just a naming convention.

TODO: talk about "placement new"?

# Destructors

What the language *does* provide is full-blown automatic destructors through the `Drop` trait, which provides the following method:

```rust
fn drop(&mut self);
```

This method gives the type time to somehow finish what it was doing.

After `drop` is run, Rust will recursively try to drop all of the fields of `self`. This is a convenience feature so that you don't have to write "destructor boilerplate" to drop children. If a struct has no special logic for being dropped other than dropping its children, then it means `Drop` doesn't need to be implemented at all!

There is no stable way to prevent this behavior in Rust 1.0.

Note that taking `&mut self` means that even if you could suppress recursive Drop, Rust will prevent you from e.g.
moving fields out of self. For most types, this is totally fine.

For instance, a custom implementation of Box might write Drop like this:

```rust
#![feature(alloc, heap_api, unique)]

extern crate alloc;

use std::ptr::{drop_in_place, Unique};
use std::mem;

use alloc::heap;

struct Box<T>{ ptr: Unique<T> }

impl<T> Drop for Box<T> {
    fn drop(&mut self) {
        unsafe {
            drop_in_place(*self.ptr);
            heap::deallocate((*self.ptr) as *mut u8,
                             mem::size_of::<T>(),
                             mem::align_of::<T>());
        }
    }
}
# fn main() {}
```

and this works fine because when Rust goes to drop the `ptr` field it just sees a Unique that has no actual `Drop` implementation. Similarly nothing can use-after-free the `ptr` because when drop exits, it becomes inaccessible.

However this wouldn't work:

```rust
#![feature(alloc, heap_api, unique)]

extern crate alloc;

use std::ptr::{drop_in_place, Unique};
use std::mem;

use alloc::heap;

struct Box<T>{ ptr: Unique<T> }

impl<T> Drop for Box<T> {
    fn drop(&mut self) {
        unsafe {
            drop_in_place(*self.ptr);
            heap::deallocate((*self.ptr) as *mut u8,
                             mem::size_of::<T>(),
                             mem::align_of::<T>());
        }
    }
}

struct SuperBox<T> { my_box: Box<T> }

impl<T> Drop for SuperBox<T> {
    fn drop(&mut self) {
        unsafe {
            // Hyper-optimized: deallocate the box's contents for it
            // without `drop`ing the contents
            heap::deallocate((*self.my_box.ptr) as *mut u8,
                             mem::size_of::<T>(),
                             mem::align_of::<T>());
        }
    }
}
# fn main() {}
```

After we deallocate the box's ptr in SuperBox's destructor, Rust will happily proceed to tell the box to Drop itself and everything will blow up with use-after-frees and double-frees.

Note that the recursive drop behavior applies to *all* structs and enums regardless of whether they implement Drop. Therefore something like

```rust
struct Boxy<T> {
    data1: Box<T>,
    data2: Box<T>,
    info: u32,
}
```

will have its data1 and data2 fields' destructors run whenever it "would" be dropped, even though it itself doesn't implement Drop. We say that such a type *needs Drop*, even though it is not itself Drop.
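The "needs Drop" behavior is directly observable. A small sketch (the `Noisy` type and `DROPS` counter are hypothetical scaffolding so we can count destructor runs):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical counter so we can observe destructor runs.
static DROPS: AtomicU32 = AtomicU32::new(0);

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

// Boxy implements no Drop of its own, yet it still "needs Drop":
// its fields are recursively dropped when it goes out of scope.
struct Boxy {
    data1: Box<Noisy>,
    data2: Box<Noisy>,
    info: u32,
}

fn main() {
    {
        let _b = Boxy {
            data1: Box::new(Noisy),
            data2: Box::new(Noisy),
            info: 0,
        };
    }
    // Both Noisy destructors ran, via Boxy's automatic recursive drop.
    assert_eq!(DROPS.load(Ordering::SeqCst), 2);
}
```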
Similarly,

```rust
enum Link {
    Next(Box<Link>),
    None,
}
```

will have its inner Box field dropped *if and only if* an instance stores the Next variant.

In general this works really nicely because you don't need to worry about adding/removing drops when you refactor your data layout. Still there's certainly many valid usecases for needing to do trickier things with destructors.

The classic safe solution to overriding recursive drop and allowing moving out of Self during `drop` is to use an Option:

```rust
#![feature(alloc, heap_api, unique)]

extern crate alloc;

use std::ptr::{drop_in_place, Unique};
use std::mem;

use alloc::heap;

struct Box<T>{ ptr: Unique<T> }

impl<T> Drop for Box<T> {
    fn drop(&mut self) {
        unsafe {
            drop_in_place(*self.ptr);
            heap::deallocate((*self.ptr) as *mut u8,
                             mem::size_of::<T>(),
                             mem::align_of::<T>());
        }
    }
}

struct SuperBox<T> { my_box: Option<Box<T>> }

impl<T> Drop for SuperBox<T> {
    fn drop(&mut self) {
        unsafe {
            // Hyper-optimized: deallocate the box's contents for it
            // without `drop`ing the contents. Need to set the `box`
            // field as `None` to prevent Rust from trying to Drop it.
            let my_box = self.my_box.take().unwrap();
            heap::deallocate((*my_box.ptr) as *mut u8,
                             mem::size_of::<T>(),
                             mem::align_of::<T>());
            mem::forget(my_box);
        }
    }
}
# fn main() {}
```

However this has fairly odd semantics: you're saying that a field that *should* always be Some *may* be None, just because that happens in the destructor. Of course this conversely makes a lot of sense: you can call arbitrary methods on self during the destructor, and this should prevent you from ever doing so after deinitializing the field. Not that it will prevent you from producing any other arbitrarily invalid state in there.

On balance this is an ok choice. Certainly what you should reach for by default.
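A heap-API-free sketch of the same Option dance shows the take-in-drop pattern in isolation (the `Guard`/`Inner` types and `DROPS` counter are hypothetical stand-ins):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical counter so we can observe destructor runs.
static DROPS: AtomicU32 = AtomicU32::new(0);

struct Inner;

impl Drop for Inner {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

// Guard stands in for SuperBox: its destructor wants full ownership
// of a field instead of letting recursive drop handle it.
struct Guard {
    inner: Option<Inner>,
}

impl Drop for Guard {
    fn drop(&mut self) {
        // take() moves the value out and leaves None behind, so the
        // automatic recursive drop of `inner` is a no-op afterwards.
        if let Some(inner) = self.inner.take() {
            // We now own `inner` and decide exactly when it drops.
            drop(inner);
        }
    }
}

fn main() {
    {
        let _g = Guard { inner: Some(Inner) };
    }
    // Inner's destructor ran exactly once: in Guard::drop, and not
    // again during the recursive drop of the (now None) field.
    assert_eq!(DROPS.load(Ordering::SeqCst), 1);
}
```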
However, in the future we expect there to be a first-class way to announce that a field shouldn't be automatically dropped.

# Leaking

Ownership-based resource management is intended to simplify composition. You acquire resources when you create the object, and you release the resources when it gets destroyed. Since destruction is handled for you, it means you can't forget to release the resources, and it happens as soon as possible! Surely this is perfect and all of our problems are solved.

Everything is terrible and we have new and exotic problems to try to solve.

Many people like to believe that Rust eliminates resource leaks. In practice, this is basically true. You would be surprised to see a Safe Rust program leak resources in an uncontrolled way.

However from a theoretical perspective this is absolutely not the case, no matter how you look at it. In the strictest sense, "leaking" is so abstract as to be unpreventable. It's quite trivial to initialize a collection at the start of a program, fill it with tons of objects with destructors, and then enter an infinite event loop that never refers to it. The collection will sit around uselessly, holding on to its precious resources until the program terminates (at which point all those resources would have been reclaimed by the OS anyway).

We may consider a more restricted form of leak: failing to drop a value that is unreachable. Rust also doesn't prevent this. In fact Rust *has a function for doing this*: `mem::forget`. This function consumes the value it is passed *and then doesn't run its destructor*.

In the past `mem::forget` was marked as unsafe as a sort of lint against using it, since failing to call a destructor is generally not a well-behaved thing to do (though useful for some special unsafe code).
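The effect of `mem::forget` is easy to observe directly (a small sketch):

```rust
use std::mem;
use std::rc::Rc;

fn main() {
    let a = Rc::new(0);
    let b = a.clone();
    assert_eq!(Rc::strong_count(&a), 2);

    // Dropping b would decrement the count back to 1...
    // ...but forget consumes b WITHOUT running its destructor,
    // so the count stays at 2 and the allocation can never be freed.
    mem::forget(b);
    assert_eq!(Rc::strong_count(&a), 2);
}
```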
However this was generally determined to be an untenable stance to take: there are many ways to fail to call a destructor in safe code. The most famous example is creating a cycle of reference-counted pointers using interior mutability. It is reasonable for safe code to assume that destructor leaks do not happen, as any program that leaks destructors is probably wrong. However unsafe code cannot rely on destructors to be run in order to be safe. For most types this doesn't matter: if you leak the destructor then the type is by definition inaccessible, so it doesn't matter, right? For instance, if you leak a Box then you waste some memory but that's hardly going to violate memory-safety. However where we must be careful with destructor leaks are proxy types. These are types which manage access to a distinct object, but don't actually own it. Proxy objects are quite rare. Proxy objects you'll need to care about are even rarer. However we'll focus on three interesting examples in the standard library: vec::Drain Rc thread::scoped::JoinGuard","breadcrumbs":"所有権に基づいたリソース管理 » Leaking","id":"46","title":"Leaking"},"47":{"body":"drain is a collections API that moves data out of the container without consuming the container. This enables us to reuse the allocation of a Vec after claiming ownership over all of its contents. It produces an iterator (Drain) that returns the contents of the Vec by-value. Now, consider Drain in the middle of iteration: some values have been moved out, and others haven't. This means that part of the Vec is now full of logically uninitialized data! We could backshift all the elements in the Vec every time we remove a value, but this would have pretty catastrophic performance consequences. Instead, we would like Drain to fix the Vec's backing storage when it is dropped. It should run itself to completion, backshift any elements that weren't removed (drain supports subranges), and then fix Vec's len. It's even unwinding-safe! Easy! 
Now consider the following: let mut vec = vec![Box::new(0); 4]; { // start draining, vec can no longer be accessed let mut drainer = vec.drain(..); // pull out two elements and immediately drop them drainer.next(); drainer.next(); // get rid of drainer, but don't call its destructor mem::forget(drainer);\n} // Oops, vec[0] was dropped, we're reading a pointer into free'd memory!\nprintln!(\"{}\", vec[0]); This is pretty clearly Not Good. Unfortunately, we're kind of stuck between a rock and a hard place: maintaining consistent state at every step has an enormous cost (and would negate any benefits of the API). Failing to maintain consistent state gives us Undefined Behavior in safe code (making the API unsound). So what can we do? Well, we can pick a trivially consistent state: set the Vec's len to be 0 when we start the iteration, and fix it up if necessary in the destructor. That way, if everything executes like normal we get the desired behavior with minimal overhead. But if someone has the audacity to mem::forget us in the middle of the iteration, all that does is leak even more (and possibly leave the Vec in an unexpected but otherwise consistent state). Since we've accepted that mem::forget is safe, this is definitely safe. We call leaks causing more leaks a leak amplification .","breadcrumbs":"所有権に基づいたリソース管理 » Drain","id":"47","title":"Drain"},"48":{"body":"Rc is an interesting case because at first glance it doesn't appear to be a proxy value at all. After all, it manages the data it points to, and dropping all the Rcs for a value will drop that value. Leaking an Rc doesn't seem like it would be particularly dangerous. It will leave the refcount permanently incremented and prevent the data from being freed or dropped, but that seems just like Box, right? Nope. 
Let's consider a simplified implementation of Rc: struct Rc<T> { ptr: *mut RcBox<T>,\n} struct RcBox<T> { data: T, ref_count: usize,\n} impl<T> Rc<T> { fn new(data: T) -> Self { unsafe { // Wouldn't it be nice if heap::allocate worked like this? let ptr = heap::allocate::<RcBox<T>>(); ptr::write(ptr, RcBox { data: data, ref_count: 1, }); Rc { ptr: ptr } } } fn clone(&self) -> Self { unsafe { (*self.ptr).ref_count += 1; } Rc { ptr: self.ptr } }\n} impl<T> Drop for Rc<T> { fn drop(&mut self) { unsafe { (*self.ptr).ref_count -= 1; if (*self.ptr).ref_count == 0 { // drop the data and then free it ptr::read(self.ptr); heap::deallocate(self.ptr); } } }\n} This code contains an implicit and subtle assumption: ref_count can fit in a usize, because there can't be more than usize::MAX Rcs in memory. However this itself assumes that the ref_count accurately reflects the number of Rcs in memory, which we know is false with mem::forget. Using mem::forget we can overflow the ref_count, and then get it down to 0 with outstanding Rcs. Then we can happily use-after-free the inner data. Bad Bad Not Good. This can be solved by just checking the ref_count and doing something. The standard library's stance is to just abort, because your program has become horribly degenerate. Also oh my gosh it's such a ridiculous corner case. 
This means the JoinGuard can't outlive the data that the other thread is working on. When the JoinGuard does get dropped it blocks the parent thread, ensuring the child terminates before any of the closed-over data goes out of scope in the parent. Usage looked like: let mut data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];\n{ let mut guards = vec![]; for x in &mut data { // Move the mutable reference into the closure, and execute // it on a different thread. The closure has a lifetime bound // by the lifetime of the mutable reference `x` we store in it. // The guard that is returned is in turn assigned the lifetime // of the closure, so it also mutably borrows `data` as `x` did. // This means we cannot access `data` until the guard goes away. let guard = thread::scoped(move || { *x *= 2; }); // store the thread's guard for later guards.push(guard); } // All guards are dropped here, forcing the threads to join // (this thread blocks here until the others terminate). // Once the threads join, the borrow expires and the data becomes // accessible again in this thread.\n}\n// data is definitely mutated here. In principle, this totally works! Rust's ownership system perfectly ensures it! ...except it relies on a destructor being called to be safe. let mut data = Box::new(0);\n{ let guard = thread::scoped(|| { // This is at best a data race. At worst, it's also a use-after-free. *data += 1; }); // Because the guard is forgotten, expiring the loan without blocking this // thread. mem::forget(guard);\n}\n// So the Box is dropped here while the scoped thread may or may not be trying\n// to access it. Dang. 
Here the destructor running was pretty fundamental to the API, and it had to be scrapped in favor of a completely different design.","breadcrumbs":"Ownership Based Resource Management » thread::scoped::JoinGuard","id":"49","title":"thread::scoped::JoinGuard"},"5":{"body":"Low-level programming cares a lot about data layout. It's a big deal. It also pervasively influences the rest of the language, so we're going to start by digging into how data is represented in Rust.","breadcrumbs":"Data Representation in Rust","id":"5","title":"Data Representation in Rust"},"50":{"body":"Rust has a tiered error-handling scheme: If something might reasonably be absent, Option is used. If something goes wrong and can reasonably be handled, Result is used. If something goes wrong and cannot reasonably be handled, the thread panics. If something catastrophic happens, the program aborts. Option and Result are overwhelmingly preferred in most situations, since they can be promoted into a panic or abort at the discretion of the API's user. A panic halts a thread's normal execution, unwinds its stack, and calls destructors as if every function had instantly returned. Since version 1.0, Rust has been of two minds when it comes to panics. In the long-long-ago, Rust was much more like Erlang. Like Erlang, Rust had lightweight tasks, which were designed to kill themselves with a panic when they reached an untenable state. Unlike an exception in Java or C++, a panic could not be caught under any circumstances: it could only be caught by the owner of the task, at which point it had to be handled properly or that task (translator's note: the owning task) would itself panic. Unwinding mattered in this story because if a task's destructors weren't called, memory and other system resources could leak. Since tasks were expected to die during normal execution, this made Rust quite unsuitable for long-running systems! As Rust moved toward its current form, a style of programming pushed by the trend toward less and less abstraction took hold, and in the process lightweight tasks were driven out and unified into heavyweight OS threads (translator's note: the so-called green threads vs. native threads story). However, as of Rust 1.0 a panic could only be caught by its parent thread, which meant that catching a panic required unwinding an entire OS thread! Unfortunately this stood in direct conflict with Rust's philosophy of zero-cost abstractions. There is an unstable API called catch_panic that makes it possible to catch a panic without spawning a thread. Translator's note: it was later renamed recover, then catch_unwind, and stabilized in Rust 1.9. Still, we encourage you to use this only as a fallback. Rust's current unwinding is heavily optimized for the \"doesn't unwind\" case: if the program is known not to unwind, there is no runtime cost for the program preparing to unwind. As a consequence, actually unwinding is more expensive than in a language like Java. You should therefore write your programs so that they do not unwind under normal circumstances, and panic only on truly major problems or programming errors. Rust's unwinding strategy is not designed to be fundamentally compatible with unwinding in any other language. As such, unwinding from another language into Rust, or from Rust into another language, is Undefined Behavior. When building an FFI you must absolutely catch every panic at the boundary! What you do in response is up to you, but something must be done. Otherwise, at best, your application will crash and burn. At worst, your application won't crash and burn: it will keep running in a completely broken state.","breadcrumbs":"Unwinding","id":"50","title":"Unwinding"},"51":{"body":"Although programs should use unwinding sparingly, there's a lot of code that can panic. If you unwrap a None, index out of bounds, or divide by 0, your program will panic. On debug builds, every arithmetic operation can panic if it overflows. Unless you are very careful and tightly control what code runs, pretty much everything can unwind, and you need to be ready for it. Being ready for unwinding is often referred to as exception safety in the broader programming world. In Rust, there are two levels of exception safety that one may concern themselves with: In unsafe code, we must be exception safe to the point of not violating memory safety. We'll call this minimal exception safety. In safe code, it is good to be exception safe to the point of your program doing the right thing. We'll call this maximal exception safety. As is the case in many places in Rust, Unsafe code must be ready to deal with bad Safe code when it comes to unwinding. Code that transiently creates unsound states must be careful that a panic does not cause that state to be used. Generally this means ensuring that only non-panicking code is run while these states exist, or making a guard that cleans up the state in the case of a panic. This does not necessarily mean that the state a panic witnesses is a fully coherent state. We need only guarantee that it's a safe state. Most Unsafe code is leaf-like, and therefore fairly easy to make exception-safe. It controls all the code that runs, and most of that code can't panic. However it is not uncommon for Unsafe code to work with arrays of temporarily uninitialized data while repeatedly invoking caller-provided code. 
Such code needs to be careful and consider exception safety.","breadcrumbs":"Unwinding » Exception Safety","id":"51","title":"Exception Safety"},"52":{"body":"Vec::push_all is a temporary hack to get extending a Vec by a slice reliably efficient without specialization. Here's a simple implementation: impl Vec { fn push_all(&mut self, to_push: &[T]) { self.reserve(to_push.len()); unsafe { // can't overflow because we just reserved this self.set_len(self.len() + to_push.len()); for (i, x) in to_push.iter().enumerate() { self.ptr().offset(i as isize).write(x.clone()); } } }\n} We bypass push in order to avoid redundant capacity and len checks on the Vec that we definitely know has capacity. The logic is totally correct, except there's a subtle problem with our code: it's not exception-safe! set_len, offset, and write are all fine; clone is the panic bomb we over-looked. Clone is completely out of our control, and is totally free to panic. If it does, our function will exit early with the length of the Vec set too large. If the Vec is looked at or dropped, uninitialized memory will be read! The fix in this case is fairly simple. If we want to guarantee that the values we did clone are dropped, we can set the len every loop iteration. If we just want to guarantee that uninitialized memory can't be observed, we can set the len after the loop.","breadcrumbs":"Unwinding » Vec::push_all","id":"52","title":"Vec::push_all"},"53":{"body":"Bubbling an element up a heap is a bit more complicated than extending a Vec. The pseudocode is as follows: bubble_up(heap, index): while index != 0 && heap[index] < heap[parent(index)]: heap.swap(index, parent(index)) index = parent(index) A literal transcription of this code to Rust is totally fine, but has an annoying performance characteristic: the self element is swapped over and over again uselessly. 
We would rather have the following: bubble_up(heap, index): let elem = heap[index] while index != 0 && element < heap[parent(index)]: heap[index] = heap[parent(index)] index = parent(index) heap[index] = elem This code ensures that each element is copied as little as possible (it is in fact necessary that elem be copied twice in general). However it now exposes some exception safety trouble! At all times, there exists two copies of one value. If we panic in this function something will be double-dropped. Unfortunately, we also don't have full control of the code: that comparison is user-defined! Unlike Vec, the fix isn't as easy here. One option is to break the user-defined code and the unsafe code into two separate phases: bubble_up(heap, index): let end_index = index; while end_index != 0 && heap[end_index] < heap[parent(end_index)]: end_index = parent(end_index) let elem = heap[index] while index != end_index: heap[index] = heap[parent(index)] index = parent(index) heap[index] = elem If the user-defined code blows up, that's no problem anymore, because we haven't actually touched the state of the heap yet. Once we do start messing with the heap, we're working with only data and functions that we trust, so there's no concern of panics. Perhaps you're not happy with this design. Surely it's cheating! And we have to do the complex heap traversal twice ! Alright, let's bite the bullet. Let's intermix untrusted and unsafe code for reals . If Rust had try and finally like in Java, we could do the following: bubble_up(heap, index): let elem = heap[index] try: while index != 0 && element < heap[parent(index)]: heap[index] = heap[parent(index)] index = parent(index) finally: heap[index] = elem The basic idea is simple: if the comparison panics, we just toss the loose element in the logically uninitialized index and bail out. Anyone who observes the heap will see a potentially inconsistent heap, but at least it won't cause any double-drops! 
If the algorithm terminates normally, then this operation happens to coincide precisely with the how we finish up regardless. Sadly, Rust has no such construct, so we're going to need to roll our own! The way to do this is to store the algorithm's state in a separate struct with a destructor for the \"finally\" logic. Whether we panic or not, that destructor will run and clean up after us. struct Hole<'a, T: 'a> { data: &'a mut [T], /// `elt` is always `Some` from new until drop. elt: Option, pos: usize,\n} impl<'a, T> Hole<'a, T> { fn new(data: &'a mut [T], pos: usize) -> Self { unsafe { let elt = ptr::read(&data[pos]); Hole { data: data, elt: Some(elt), pos: pos, } } } fn pos(&self) -> usize { self.pos } fn removed(&self) -> &T { self.elt.as_ref().unwrap() } unsafe fn get(&self, index: usize) -> &T { &self.data[index] } unsafe fn move_to(&mut self, index: usize) { let index_ptr: *const _ = &self.data[index]; let hole_ptr = &mut self.data[self.pos]; ptr::copy_nonoverlapping(index_ptr, hole_ptr, 1); self.pos = index; }\n} impl<'a, T> Drop for Hole<'a, T> { fn drop(&mut self) { // fill the hole again unsafe { let pos = self.pos; ptr::write(&mut self.data[pos], self.elt.take().unwrap()); } }\n} impl BinaryHeap { fn sift_up(&mut self, pos: usize) { unsafe { // Take out the value at `pos` and create a hole. let mut hole = Hole::new(&mut self.data, pos); while hole.pos() != 0 { let parent = parent(hole.pos()); if hole.removed() <= hole.get(parent) { break } hole.move_to(parent); } // Hole will be unconditionally filled here; panic or not! 
} }\n}","breadcrumbs":"Unwinding » BinaryHeap::sift_up","id":"53","title":"BinaryHeap::sift_up"},"54":{"body":"All unsafe types must satisfy minimal exception safety, but not all unsafe types need to satisfy maximal exception safety. Even if the type itself does, an implementation can implicitly ascribe additional meaning to it. For instance, an integer is certainly exception-safe, but it carries no semantics of its own that would guarantee maximal exception safety, so panicking while updating an integer can leave the program in an inconsistent state. This is usually fine, because anything that witnesses an exception is about to die. For instance, if you send a Vec to another thread and that thread panics, it doesn't matter if the Vec ends up in a weird state: it will be dropped and vanish into the void forever. However some types can smuggle values across the panic boundary. These types may choose to explicitly poison themselves if they witness a panic. Poisoning doesn't entail anything in particular; generally it just signals that normal usage should not proceed. The best-known example is the standard library's Mutex type, which poisons itself when its corresponding MutexGuard (the thing returned when a lock is acquired) is dropped during a panic. Any future attempt to lock the Mutex will then return an Err or panic. Mutex poisoning is not about safety in the sense Rust usually means. It acts as a safety guard against blindly using data that was being modified when the thread holding the lock panicked: such data was likely mid-modification, and may be inconsistent or incomplete. Note that, if such a type is written correctly, it definitely cannot violate memory safety; after all, it must be minimally exception-safe. However if the Mutex held, say, a BinaryHeap whose contents don't actually satisfy the heap property, it's unlikely that any program using that data structure will do what its author intended. It will behave abnormally. Still, with sufficient care, the value may be usable for something even then. It is safe. Just maybe nonsense.","breadcrumbs":"Unwinding » Poisoning","id":"54","title":"Poisoning"},"55":{"body":"Rust as a language doesn't really have an opinion on how to do concurrency or parallelism. The standard library exposes OS threads and blocking sys-calls because everyone has those, and they're uniform enough that you can provide an abstraction over them in a relatively uncontroversial way. Message passing, green threads, and async APIs are all diverse enough that any abstraction over them tends to involve trade-offs that we weren't willing to commit to for 1.0. However the way Rust models concurrency makes it relatively easy to design your own concurrency paradigm as a library and have everyone else's code Just Work with yours. 
Just require the right lifetimes and Send and Sync where appropriate and you're off to the races. Or rather, off to the... not... having... races.","breadcrumbs":"Concurrency and Parallelism","id":"55","title":"Concurrency and Parallelism"},"56":{"body":"Safe Rust guarantees an absence of data races, which are defined as: two or more threads concurrently accessing a location of memory one of them is a write one of them is unsynchronized A data race has Undefined Behavior, and is therefore impossible to perform in Safe Rust. Data races are mostly prevented through rust's ownership system: it's impossible to alias a mutable reference, so it's impossible to perform a data race. Interior mutability makes this more complicated, which is largely why we have the Send and Sync traits (see below). However Rust does not prevent general race conditions. This is pretty fundamentally impossible, and probably honestly undesirable. Your hardware is racy, your OS is racy, the other programs on your computer are racy, and the world this all runs in is racy. Any system that could genuinely claim to prevent all race conditions would be pretty awful to use, if not just incorrect. So it's perfectly \"fine\" for a Safe Rust program to get deadlocked or do something nonsensical with incorrect synchronization. Obviously such a program isn't very good, but Rust can only hold your hand so far. Still, a race condition can't violate memory safety in a Rust program on its own. Only in conjunction with some other unsafe code can a race condition actually violate memory safety. For instance: use std::thread;\nuse std::sync::atomic::{AtomicUsize, Ordering};\nuse std::sync::Arc; let data = vec![1, 2, 3, 4];\n// Arc so that the memory the AtomicUsize is stored in still exists for\n// the other thread to increment, even if we completely finish executing\n// before it. 
Rust won't compile the program without it, because of the\n// lifetime requirements of thread::spawn!\nlet idx = Arc::new(AtomicUsize::new(0));\nlet other_idx = idx.clone(); // `move` captures other_idx by-value, moving it into this thread\nthread::spawn(move || { // It's ok to mutate idx because this value // is an atomic, so it can't cause a Data Race. other_idx.fetch_add(10, Ordering::SeqCst);\n}); // Index with the value loaded from the atomic. This is safe because we\n// read the atomic memory only once, and then pass a copy of that value\n// to the Vec's indexing implementation. This indexing will be correctly\n// bounds checked, and there's no chance of the value getting changed\n// in the middle. However our program may panic if the thread we spawned\n// managed to increment before this ran. A race condition because correct\n// program execution (panicking is rarely correct) depends on order of\n// thread execution.\nprintln!(\"{}\", data[idx.load(Ordering::SeqCst)]); use std::thread;\nuse std::sync::atomic::{AtomicUsize, Ordering};\nuse std::sync::Arc; let data = vec![1, 2, 3, 4]; let idx = Arc::new(AtomicUsize::new(0));\nlet other_idx = idx.clone(); // `move` captures other_idx by-value, moving it into this thread\nthread::spawn(move || { // It's ok to mutate idx because this value // is an atomic, so it can't cause a Data Race. other_idx.fetch_add(10, Ordering::SeqCst);\n}); if idx.load(Ordering::SeqCst) < data.len() { unsafe { // Incorrectly loading the idx after we did the bounds check. // It could have changed. This is a race condition, *and dangerous* // because we decided to do `get_unchecked`, which is `unsafe`. println!(\"{}\", data.get_unchecked(idx.load(Ordering::SeqCst))); }\n}","breadcrumbs":"Concurrency » Data Races and Race Conditions","id":"56","title":"Data Races and Race Conditions"},"57":{"body":"Not everything obeys inherited mutability, though. Some types allow you to multiply alias a location in memory while mutating it. 
Unless these types use synchronization to manage this access, they are absolutely not thread safe. Rust captures this through the Send and Sync traits. A type is Send if it is safe to send it to another thread. A type is Sync if it is safe to share between threads (&T is Send). Send and Sync are fundamental to Rust's concurrency story. As such, a substantial amount of special tooling exists to make them work right. First and foremost, they're unsafe traits . This means that they are unsafe to implement, and other unsafe code can assume that they are correctly implemented. Since they're marker traits (they have no associated items like methods), correctly implemented simply means that they have the intrinsic properties an implementor should have. Incorrectly implementing Send or Sync can cause Undefined Behavior. Send and Sync are also automatically derived traits. This means that, unlike every other trait, if a type is composed entirely of Send or Sync types, then it is Send or Sync. Almost all primitives are Send and Sync, and as a consequence pretty much all types you'll ever interact with are Send and Sync. Major exceptions include: raw pointers are neither Send nor Sync (because they have no safety guards). UnsafeCell isn't Sync (and therefore Cell and RefCell aren't). Rc isn't Send or Sync (because the refcount is shared and unsynchronized). Rc and UnsafeCell are very fundamentally not thread-safe: they enable unsynchronized shared mutable state. However raw pointers are, strictly speaking, marked as thread-unsafe as more of a lint . Doing anything useful with a raw pointer requires dereferencing it, which is already unsafe. In that sense, one could argue that it would be \"fine\" for them to be marked as thread safe. However it's important that they aren't thread safe to prevent types that contain them from being automatically marked as thread safe. 
These types have non-trivial untracked ownership, and it's unlikely that their author was necessarily thinking hard about thread safety. In the case of Rc, we have a nice example of a type that contains a *mut that is definitely not thread safe. Types that aren't automatically derived can simply implement them if desired: struct MyBox(*mut u8); unsafe impl Send for MyBox {}\nunsafe impl Sync for MyBox {} In the incredibly rare case that a type is inappropriately automatically derived to be Send or Sync, then one can also unimplement Send and Sync: #![feature(optin_builtin_traits)] // I have some magic semantics for some synchronization primitive!\nstruct SpecialThreadToken(u8); impl !Send for SpecialThreadToken {}\nimpl !Sync for SpecialThreadToken {} Note that in and of itself it is impossible to incorrectly derive Send and Sync. Only types that are ascribed special meaning by other unsafe code can possible cause trouble by being incorrectly Send or Sync. Most uses of raw pointers should be encapsulated behind a sufficient abstraction that Send and Sync can be derived. For instance all of Rust's standard collections are Send and Sync (when they contain Send and Sync types) in spite of their pervasive use of raw pointers to manage allocations and complex ownership. Similarly, most iterators into these collections are Send and Sync because they largely behave like an & or &mut into the collection. TODO: better explain what can or can't be Send or Sync. Sufficient to appeal only to data races?","breadcrumbs":"Concurrency » Send and Sync","id":"57","title":"Send and Sync"},"58":{"body":"Rust pretty blatantly just inherits C11's memory model for atomics. This is not due to this model being particularly excellent or easy to understand. Indeed, this model is quite complex and known to have several flaws . Rather, it is a pragmatic concession to the fact that everyone is pretty bad at modeling atomics. 
At very least, we can benefit from existing tooling and research around C. Trying to fully explain the model in this book is fairly hopeless. It's defined in terms of madness-inducing causality graphs that require a full book to properly understand in a practical way. If you want all the nitty-gritty details, you should check out C's specification (Section 7.17) . Still, we'll try to cover the basics and some of the problems Rust developers face. The C11 memory model is fundamentally about trying to bridge the gap between the semantics we want, the optimizations compilers want, and the inconsistent chaos our hardware wants. We would like to just write programs and have them do exactly what we said but, you know, fast. Wouldn't that be great?","breadcrumbs":"Concurrency » Atomics","id":"58","title":"Atomics"},"59":{"body":"Compilers fundamentally want to be able to do all sorts of complicated transformations to reduce data dependencies and eliminate dead code. In particular, they may radically change the actual order of events, or make events never occur! If we write something like x = 1;\ny = 3;\nx = 2; The compiler may conclude that it would be best if your program did x = 2;\ny = 3; This has inverted the order of events and completely eliminated one event. From a single-threaded perspective this is completely unobservable: after all the statements have executed we are in exactly the same state. But if our program is multi-threaded, we may have been relying on x to actually be assigned to 1 before y was assigned. We would like the compiler to be able to make these kinds of optimizations, because they can seriously improve performance. 
On the other hand, we'd also like to be able to depend on our program doing the thing we said.","breadcrumbs":"Concurrency » Compiler Reordering","id":"59","title":"Compiler Reordering"},"6":{"body":"First and foremost, all types have an alignment specified in bytes. The alignment of a type specifies what addresses are valid to store the value at. A value of alignment n must only be stored at an address that is a multiple of n. So alignment 2 means you must be stored at an even address, and 1 means that you can be stored anywhere. Alignment is at least 1, and always a power of 2. Most primitives are generally aligned to their size, although this is platform-specific behavior. In particular, on x86 u64 and f64 may be only aligned to 32 bits. A type's size must always be a multiple of its alignment. This ensures that an array of that type may always be indexed by offsetting by a multiple of its size. Note that the size and alignment of a type may not be known statically in the case of dynamically sized types. Rust gives you the following ways to lay out composite data: structs (named product types) tuples (anonymous product types) arrays (homogeneous product types) enums (named sum types -- tagged unions) An enum is said to be C-like if none of its variants have associated data. Composite structures will have an alignment equal to the maximum of their fields' alignment. Rust will consequently insert padding where necessary to ensure that all fields are properly aligned and that the overall type's size is a multiple of its alignment. For instance: struct A { a: u8, b: u32, c: u16,\n} will be 32-bit aligned on an architecture that aligns these primitives to their respective sizes. The whole struct will therefore have a size that is a multiple of 32 bits. It will potentially become: struct A { a: u8, _pad1: [u8; 3], // to align `b` b: u32, c: u16, _pad2: [u8; 2], // to make the overall size a multiple of 4\n} There is no indirection for these types; all data is stored within the struct, as you would expect in C. However with the exception of arrays (which are densely packed and in-order), the layout of data is not by default specified in Rust. Given the two following struct definitions: struct A { a: i32, b: u64,\n} struct B { a: i32, b: u64,\n} Rust does guarantee that two instances of A have their data laid out in exactly the same way. However Rust does not guarantee that an instance of A has the same field ordering or padding as an instance of B (in practice there's no particular reason why they wouldn't). With this A and B you might think it inflexible that layout isn't guaranteed, but several other features of Rust make it desirable for the language to play with data layout in complex ways. For instance, consider this struct: struct Foo<T, U> { count: u16, data1: T, data2: U,\n} Now consider the monomorphizations of Foo<u32, u16> and Foo<u16, u32>. If Rust has to lay out the fields in the order specified, it must pad the values in the struct to satisfy their alignment requirements. So if Rust couldn't reorder fields, we would expect it to produce the following: struct Foo<u16, u32> { count: u16, data1: u16, data2: u32,\n} struct Foo<u32, u16> { count: u16, _pad1: u16, data1: u32, data2: u16, _pad2: u16,\n} The latter case quite simply wastes space. 
An optimal use of space therefore requires different monomorphizations to have different field orderings. Note that this is a hypothetical optimization that is not yet implemented in Rust 1.0. Enums require even more complicated considerations. Naively, an enum such as: enum Foo { A(u32), B(u64), C(u8),\n} would be laid out as: struct FooRepr { data: u64, // this is either a u64, u32, or u8 based on `tag` tag: u8, // 0 = A, 1 = B, 2 = C\n} And indeed this is approximately how the data would be laid out in general. However there are several cases where such a representation is inefficient. The classic case is Rust's \"null pointer optimization\": an enum consisting of a variant with no data (e.g. None) and a (potentially nested) non-nullable pointer variant (e.g. &T) makes the tag unnecessary, because a null pointer value can safely be interpreted to mean that the data-less variant is chosen. The net result is that, for example, size_of::<Option<&T>>() == size_of::<&T>(). There are many types in Rust that are, or contain, non-nullable pointers, such as Box<T>, Vec<T>, String, &T, and &mut T. Similarly, one can imagine nested enums pooling their tags into a single discriminant, as their valid values are by definition limited. In principle enums could use quite elaborate algorithms to cache bits throughout nested types with special constrained representations. Because of this possibility, leaving enum layout unspecified is especially desirable today.","breadcrumbs":"Data Layout » repr(Rust)","id":"6","title":"repr(Rust)"},"60":{"body":"On the other hand, even if the compiler totally understood what we wanted and respected our wishes, our hardware might instead get us in trouble. Trouble comes from CPUs in the form of memory hierarchies. There is indeed a global shared memory space somewhere in your hardware, but from the perspective of each CPU core it is so very far away and so very slow. Each CPU would rather work with its local cache of the data and only go through all the anguish of talking to shared memory only when it doesn't actually have that memory in cache. After all, that's the whole point of the cache, right? If every read from the cache had to run back to shared memory to double check that it hadn't changed, what would the point be? The end result is that the hardware doesn't guarantee that events that occur in the same order on one thread, occur in the same order on another thread. To guarantee this, we must issue special instructions to the CPU telling it to be a bit less smart. 
For instance, say we convince the compiler to emit this logic: initial state: x = 0, y = 1 THREAD 1 THREAD2\ny = 3; if x == 1 {\nx = 1; y *= 2; } Ideally this program has 2 possible final states: y = 3: (thread 2 did the check before thread 1 completed) y = 6: (thread 2 did the check after thread 1 completed) However there's a third potential state that the hardware enables: y = 2: (thread 2 saw x = 1, but not y = 3, and then overwrote y = 3) It's worth noting that different kinds of CPU provide different guarantees. It is common to separate hardware into two categories: strongly-ordered and weakly- ordered. Most notably x86/64 provides strong ordering guarantees, while ARM provides weak ordering guarantees. This has two consequences for concurrent programming: Asking for stronger guarantees on strongly-ordered hardware may be cheap or even free because they already provide strong guarantees unconditionally. Weaker guarantees may only yield performance wins on weakly-ordered hardware. Asking for guarantees that are too weak on strongly-ordered hardware is more likely to happen to work, even though your program is strictly incorrect. If possible, concurrent algorithms should be tested on weakly-ordered hardware.","breadcrumbs":"Concurrency » Hardware Reordering","id":"60","title":"Hardware Reordering"},"61":{"body":"The C11 memory model attempts to bridge the gap by allowing us to talk about the causality of our program. Generally, this is by establishing a happens before relationship between parts of the program and the threads that are running them. This gives the hardware and compiler room to optimize the program more aggressively where a strict happens-before relationship isn't established, but forces them to be more careful where one is established. The way we communicate these relationships are through data accesses and atomic accesses . Data accesses are the bread-and-butter of the programming world. 
They are fundamentally unsynchronized and compilers are free to aggressively optimize them. In particular, data accesses are free to be reordered by the compiler on the assumption that the program is single-threaded. The hardware is also free to propagate the changes made in data accesses to other threads as lazily and inconsistently as it wants. Most critically, data accesses are how data races happen. Data accesses are very friendly to the hardware and compiler, but as we've seen they offer awful semantics to try to write synchronized code with. Actually, that's too weak. It is literally impossible to write correct synchronized code using only data accesses. Atomic accesses are how we tell the hardware and compiler that our program is multi-threaded. Each atomic access can be marked with an ordering that specifies what kind of relationship it establishes with other accesses. In practice, this boils down to telling the compiler and hardware certain things they can't do. For the compiler, this largely revolves around re-ordering of instructions. For the hardware, this largely revolves around how writes are propagated to other threads. The set of orderings Rust exposes are: Sequentially Consistent (SeqCst) Release Acquire Relaxed (Note: We explicitly do not expose the C11 consume ordering) TODO: negative reasoning vs positive reasoning? TODO: \"can't forget to synchronize\"","breadcrumbs":"Concurrency » Data Accesses","id":"61","title":"Data Accesses"},"62":{"body":"Sequentially Consistent is the most powerful of all, implying the restrictions of all other orderings. Intuitively, a sequentially consistent operation cannot be reordered: all accesses on one thread that happen before and after a SeqCst access stay before and after it. A data-race-free program that uses only sequentially consistent atomics and data accesses has the very nice property that there is a single global execution of the program's instructions that all threads agree on. 
This execution is also particularly nice to reason about: it's just an interleaving of each thread's individual executions. This does not hold if you start using the weaker atomic orderings. The relative developer-friendliness of sequential consistency doesn't come for free. Even on strongly-ordered platforms sequential consistency involves emitting memory fences. In practice, sequential consistency is rarely necessary for program correctness. However sequential consistency is definitely the right choice if you're not confident about the other memory orders. Having your program run a bit slower than it needs to is certainly better than it running incorrectly! It's also mechanically trivial to downgrade atomic operations to have a weaker consistency later on. Just change SeqCst to Relaxed and you're done! Of course, proving that this transformation is correct is a whole other matter.","breadcrumbs":"Concurrency » Sequentially Consistent","id":"62","title":"Sequentially Consistent"},"63":{"body":"Acquire and Release are largely intended to be paired. Their names hint at their use case: they're perfectly suited for acquiring and releasing locks, and ensuring that critical sections don't overlap. Intuitively, an acquire access ensures that every access after it stays after it. However operations that occur before an acquire are free to be reordered to occur after it. Similarly, a release access ensures that every access before it stays before it. However operations that occur after a release are free to be reordered to occur before it. When thread A releases a location in memory and then thread B subsequently acquires the same location in memory, causality is established. Every write that happened before A's release will be observed by B after its release. However no causality is established with any other threads. Similarly, no causality is established if A and B access different locations in memory. 
Basic use of release-acquire is therefore simple: you acquire a location of memory to begin the critical section, and then release that location to end it. For instance, a simple spinlock might look like: use std::sync::Arc;\nuse std::sync::atomic::{AtomicBool, Ordering};\nuse std::thread; fn main() { let lock = Arc::new(AtomicBool::new(false)); // value answers \"am I locked?\" // ... distribute lock to threads somehow ... // Try to acquire the lock by setting it to true while lock.compare_and_swap(false, true, Ordering::Acquire) { } // broke out of the loop, so we successfully acquired the lock! // ... scary data accesses ... // ok we're done, release the lock lock.store(false, Ordering::Release);\n} On strongly-ordered platforms most accesses have release or acquire semantics, making release and acquire often totally free. This is not the case on weakly-ordered platforms.","breadcrumbs":"Concurrency » Acquire-Release","id":"63","title":"Acquire-Release"},"64":{"body":"Relaxed accesses are the absolute weakest. They can be freely re-ordered and provide no happens-before relationship. Still, relaxed operations are still atomic. That is, they don't count as data accesses and any read-modify-write operations done to them occur atomically. Relaxed operations are appropriate for things that you definitely want to happen, but don't particularly otherwise care about. For instance, incrementing a counter can be safely done by multiple threads using a relaxed fetch_add if you're not using the counter to synchronize any other accesses. There's rarely a benefit in making an operation relaxed on strongly-ordered platforms, since they usually provide release-acquire semantics anyway. However relaxed operations can be cheaper on weakly-ordered platforms.","breadcrumbs":"Concurrency » Relaxed","id":"64","title":"Relaxed"},"65":{"body":"To bring everything together, we're going to write std::Vec from scratch. 
Because all the best tools for writing unsafe code are unstable, this project will only work on nightly (as of Rust 1.9.0). With the exception of the allocator API, much of the unstable code we'll use is expected to be stabilized in a similar form as it is today. However we will generally try to avoid unstable code where possible. In particular we won't use any intrinsics that could make the code a little bit nicer or more efficient, because intrinsics are permanently unstable. Although many intrinsics do become stabilized elsewhere (std::ptr and std::mem consist of many intrinsics). Ultimately this means our implementation may not take advantage of all possible optimizations, though it will be by no means naive . We will definitely get into the weeds over nitty-gritty details, even when the problem doesn't really merit it. You wanted advanced. We're gonna go advanced.","breadcrumbs":"Example: Implementing Vec","id":"65","title":"Example: Implementing Vec"},"66":{"body":"First off, we need to come up with the struct layout. A Vec has three parts: a pointer to the allocation, the size of the allocation, and the number of elements that have been initialized. Naively, this means we just want this design: pub struct Vec<T> { ptr: *mut T, cap: usize, len: usize,\n}\n# fn main() {} And indeed this would compile. Unfortunately, it would be incorrect. First, the compiler will give us too strict variance. So a &Vec<&'static str> couldn't be used where an &Vec<&'a str> was expected. More importantly, it will give incorrect ownership information to the drop checker, as it will conservatively assume we don't own any values of type T. See the chapter on ownership and lifetimes for all the details on variance and drop check. As we saw in the ownership chapter, we should use Unique<T> in place of *mut T when we have a raw pointer to an allocation we own. Unique is unstable, so we'd like to not use it if possible, though. 
As a recap, Unique is a wrapper around a raw pointer that declares that: We are variant over T We may own a value of type T (for drop check) We are Send/Sync if T is Send/Sync We deref to *mut T (so it largely acts like a *mut in our code) Our pointer is never null (so Option> is null-pointer-optimized) We can implement all of the above requirements except for the last one in stable Rust: use std::marker::PhantomData;\nuse std::ops::Deref;\nuse std::mem; struct Unique { ptr: *const T, // *const for variance _marker: PhantomData, // For the drop checker\n} // Deriving Send and Sync is safe because we are the Unique owners\n// of this data. It's like Unique is \"just\" T.\nunsafe impl Send for Unique {}\nunsafe impl Sync for Unique {} impl Unique { pub fn new(ptr: *mut T) -> Self { Unique { ptr: ptr, _marker: PhantomData } }\n} impl Deref for Unique { type Target = *mut T; fn deref(&self) -> &*mut T { // There's no way to cast the *const to a *mut // while also taking a reference. So we just // transmute it since it's all \"just pointers\". unsafe { mem::transmute(&self.ptr) } }\n}\n# fn main() {} Unfortunately the mechanism for stating that your value is non-zero is unstable and unlikely to be stabilized soon. As such we're just going to take the hit and use std's Unique: #![feature(unique)] use std::ptr::{Unique, self}; pub struct Vec { ptr: Unique, cap: usize, len: usize,\n} # fn main() {} If you don't care about the null-pointer optimization, then you can use the stable code. However we will be designing the rest of the code around enabling the optimization. In particular, Unique::new is unsafe to call, because putting null inside of it is Undefined Behavior. 
Our stable Unique doesn't need new to be unsafe because it doesn't make any interesting guarantees about its contents.","breadcrumbs":"Implementing Vec » Layout","id":"66","title":"Layout"},"67":{"body":"Using Unique throws a wrench in an important feature of Vec (and indeed all of the std collections): an empty Vec doesn't actually allocate at all. So if we can't allocate, but also can't put a null pointer in ptr, what do we do in Vec::new? Well, we just put some other garbage in there! This is perfectly fine because we already have cap == 0 as our sentinel for no allocation. We don't even need to handle it specially in almost any code because we usually need to check if cap > len or len > 0 anyway. The traditional Rust value to put here is 0x01. The standard library actually exposes this as alloc::heap::EMPTY. There are quite a few places where we'll want to use heap::EMPTY because there's no real allocation to talk about but null would make the compiler do bad things. All of the heap API is totally unstable under the heap_api feature, though. We could trivially define heap::EMPTY ourselves, but we'll want the rest of the heap API anyway, so let's just get that dependency over with. So: #![feature(alloc, heap_api)] use std::mem; use alloc::heap::EMPTY; impl Vec { fn new() -> Self { assert!(mem::size_of::() != 0, \"We're not ready to handle ZSTs\"); unsafe { // need to cast EMPTY to the actual ptr type we want, let // inference handle it. Vec { ptr: Unique::new(heap::EMPTY as *mut _), len: 0, cap: 0 } } }\n} I slipped in that assert there because zero-sized types will require some special handling throughout our code, and I want to defer the issue for now. Without this assert, some of our early drafts will do some Very Bad Things. Next we need to figure out what to actually do when we do want space. For that, we'll need to use the rest of the heap APIs. These basically allow us to talk directly to Rust's allocator (jemalloc by default). 
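As an aside for readers on modern stable Rust: NonNull::dangling() now fills the role the text assigns to heap::EMPTY, a non-null, well-aligned placeholder pointer for the unallocated (cap == 0) state. This API postdates the text (which targets Rust 1.9 nightly), so treat this as a sketch of the same idea rather than what the tutorial uses:

```rust
use std::mem;
use std::ptr::NonNull;

// A stand-in for heap::EMPTY on modern stable Rust: a non-null,
// well-aligned pointer representing "no allocation yet".
// It must never be dereferenced.
fn empty_ptr<T>() -> NonNull<T> {
    NonNull::dangling()
}

fn main() {
    let p = empty_ptr::<u64>();
    // Non-null, so Option<NonNull<T>> stays null-pointer-optimized.
    assert!(!p.as_ptr().is_null());
    // Well-aligned for T, as required for a valid sentinel.
    assert_eq!(p.as_ptr() as usize % mem::align_of::<u64>(), 0);
}
```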
We'll also need a way to handle out-of-memory (OOM) conditions. The standard library calls the abort intrinsic, which just calls an illegal instruction to crash the whole program. The reason we abort and don't panic is because unwinding can cause allocations to happen, and that seems like a bad thing to do when your allocator just came back with \"hey I don't have any more memory\". Of course, this is a bit silly since most platforms don't actually run out of memory in a conventional way. Your operating system will probably kill the application by another means if you legitimately start using up all the memory. The most likely way we'll trigger OOM is by just asking for ludicrous quantities of memory at once (e.g. half the theoretical address space). As such it's probably fine to panic and nothing bad will happen. Still, we're trying to be like the standard library as much as possible, so we'll just kill the whole program. We said we don't want to use intrinsics, so doing exactly what std does is out. Instead, we'll call std::process::exit with some random number. fn oom() { ::std::process::exit(-9999);\n} Okay, now we can write growing. Roughly, we want to have this logic: if cap == 0: allocate() cap = 1\nelse: reallocate() cap *= 2 But Rust's only supported allocator API is so low level that we'll need to do a fair bit of extra work. We also need to guard against some special conditions that can occur with really large allocations or empty allocations. In particular, ptr::offset will cause us a lot of trouble, because it has the semantics of LLVM's GEP inbounds instruction. If you're fortunate enough to not have dealt with this instruction, here's the basic story with GEP: alias analysis, alias analysis, alias analysis. It's super important to an optimizing compiler to be able to reason about data dependencies and aliasing. 
As a simple example, consider the following fragment of code: # let x = &mut 0;\n# let y = &mut 0;\n*x *= 7;\n*y *= 3; If the compiler can prove that x and y point to different locations in memory, the two operations can in theory be executed in parallel (by e.g. loading them into different registers and working on them independently). However the compiler can't do this in general because if x and y point to the same location in memory, the operations need to be done to the same value, and they can't just be merged afterwards. When you use GEP inbounds, you are specifically telling LLVM that the offsets you're about to do are within the bounds of a single \"allocated\" entity. The ultimate payoff being that LLVM can assume that if two pointers are known to point to two disjoint objects, all the offsets of those pointers are also known to not alias (because you won't just end up in some random place in memory). LLVM is heavily optimized to work with GEP offsets, and inbounds offsets are the best of all, so it's important that we use them as much as possible. So that's what GEP's about, how can it cause us trouble? The first problem is that we index into arrays with unsigned integers, but GEP (and as a consequence ptr::offset) takes a signed integer. This means that half of the seemingly valid indices into an array will overflow GEP and actually go in the wrong direction! As such we must limit all allocations to isize::MAX elements. This actually means we only need to worry about byte-sized objects, because e.g. > isize::MAX u16s will truly exhaust all of the system's memory. However in order to avoid subtle corner cases where someone reinterprets some array of < isize::MAX objects as bytes, std limits all allocations to isize::MAX bytes. 
On all 64-bit targets that Rust currently supports we're artificially limited to significantly less than all 64 bits of the address space (modern x64 platforms only expose 48-bit addressing), so we can rely on just running out of memory first. However on 32-bit targets, particularly those with extensions to use more of the address space (PAE x86 or x32), it's theoretically possible to successfully allocate more than isize::MAX bytes of memory. However since this is a tutorial, we're not going to be particularly optimal here, and just unconditionally check, rather than use clever platform-specific cfgs. The other corner-case we need to worry about is empty allocations. There will be two kinds of empty allocations we need to worry about: cap = 0 for all T, and cap > 0 for zero-sized types. These cases are tricky because they come down to what LLVM means by \"allocated\". LLVM's notion of an allocation is significantly more abstract than how we usually use it. Because LLVM needs to work with different languages' semantics and custom allocators, it can't really intimately understand allocation. Instead, the main idea behind allocation is \"doesn't overlap with other stuff\". That is, heap allocations, stack allocations, and globals don't randomly overlap. Yep, it's about alias analysis. As such, Rust can technically play a bit fast and loose with the notion of an allocation as long as it's consistent . Getting back to the empty allocation case, there are a couple of places where we want to offset by 0 as a consequence of generic code. The question is then: is it consistent to do so? For zero-sized types, we have concluded that it is indeed consistent to do a GEP inbounds offset by an arbitrary number of elements. This is a runtime no-op because every element takes up no space, and it's fine to pretend that there's infinite zero-sized types allocated at 0x01. 
No allocator will ever allocate that address, because they won't allocate 0x00 and they generally allocate to some minimal alignment higher than a byte. Also generally the whole first page of memory is protected from being allocated anyway (a whole 4k, on many platforms). However what about for positive-sized types? That one's a bit trickier. In principle, you can argue that offsetting by 0 gives LLVM no information: either there's an element before the address or after it, but it can't know which. However we've chosen to conservatively assume that it may do bad things. As such we will guard against this case explicitly. Phew Ok with all the nonsense out of the way, let's actually allocate some memory: fn grow(&mut self) { // this is all pretty delicate, so let's say it's all unsafe unsafe { // current API requires us to specify size and alignment manually. let align = mem::align_of::(); let elem_size = mem::size_of::(); let (new_cap, ptr) = if self.cap == 0 { let ptr = heap::allocate(elem_size, align); (1, ptr) } else { // as an invariant, we can assume that `self.cap < isize::MAX`, // so this doesn't need to be checked. let new_cap = self.cap * 2; // Similarly this can't overflow due to previously allocating this let old_num_bytes = self.cap * elem_size; // check that the new allocation doesn't exceed `isize::MAX` at all // regardless of the actual size of the capacity. This combines the // `new_cap <= isize::MAX` and `new_num_bytes <= usize::MAX` checks // we need to make. We lose the ability to allocate e.g. 2/3rds of // the address space with a single Vec of i16's on 32-bit though. // Alas, poor Yorick -- I knew him, Horatio. 
assert!(old_num_bytes <= (::std::isize::MAX as usize) / 2, \"capacity overflow\"); let new_num_bytes = old_num_bytes * 2; let ptr = heap::reallocate(*self.ptr as *mut _, old_num_bytes, new_num_bytes, align); (new_cap, ptr) }; // If allocate or reallocate fail, we'll get `null` back if ptr.is_null() { oom(); } self.ptr = Unique::new(ptr as *mut _); self.cap = new_cap; }\n} Nothing particularly tricky here. Just computing sizes and alignments and doing some careful multiplication checks.","breadcrumbs":"Implementing Vec » Allocating Memory","id":"67","title":"Allocating Memory"},"68":{"body":"Alright. We can initialize. We can allocate. Let's actually implement some functionality! Let's start with push. All it needs to do is check if we're full to grow, unconditionally write to the next index, and then increment our length. To do the write we have to be careful not to evaluate the memory we want to write to. At worst, it's truly uninitialized memory from the allocator. At best it's the bits of some old value we popped off. Either way, we can't just index to the memory and dereference it, because that will evaluate the memory as a valid instance of T. Worse, foo[idx] = x will try to call drop on the old value of foo[idx]! The correct way to do this is with ptr::write, which just blindly overwrites the target address with the bits of the value we provide. No evaluation involved. For push, if the old len (before push was called) is 0, then we want to write to the 0th index. So we should offset by the old len. pub fn push(&mut self, elem: T) { if self.len == self.cap { self.grow(); } unsafe { ptr::write(self.ptr.offset(self.len as isize), elem); } // Can't fail, we'll OOM first. self.len += 1;\n} Easy! How about pop? Although this time the index we want to access is initialized, Rust won't just let us dereference the location of memory to move the value out, because that would leave the memory uninitialized! 
For this we need ptr::read, which just copies out the bits from the target address and interprets it as a value of type T. This will leave the memory at this address logically uninitialized, even though there is in fact a perfectly good instance of T there. For pop, if the old len is 1, we want to read out of the 0th index. So we should offset by the new len. pub fn pop(&mut self) -> Option<T> { if self.len == 0 { None } else { self.len -= 1; unsafe { Some(ptr::read(self.ptr.offset(self.len as isize))) } }\n}","breadcrumbs":"Implementing Vec » Push and Pop","id":"68","title":"Push and Pop"},"69":{"body":"Next we should implement Drop so that we don't massively leak tons of resources. The easiest way is to just call pop until it yields None, and then deallocate our buffer. Note that calling pop is unneeded if T: !Drop. In theory we can ask Rust if T needs_drop and omit the calls to pop. However in practice LLVM is really good at removing simple side-effect free code like this, so I wouldn't bother unless you notice it's not being stripped (in this case it is). We must not call heap::deallocate when self.cap == 0, as in this case we haven't actually allocated any memory. impl<T> Drop for Vec<T> { fn drop(&mut self) { if self.cap != 0 { while let Some(_) = self.pop() { } let align = mem::align_of::<T>(); let elem_size = mem::size_of::<T>(); let num_bytes = elem_size * self.cap; unsafe { heap::deallocate(*self.ptr as *mut _, num_bytes, align); } } }\n}","breadcrumbs":"Implementing Vec » Deallocating","id":"69","title":"Deallocating"},"7":{"body":"We usually think of types as having a fixed size of zero or more bytes. But that isn't always the case.","breadcrumbs":"Data Layout » Exotically Sized Types","id":"7","title":"Exotically Sized Types"},"70":{"body":"Alright! We've got a decent minimal stack implemented. We can push, we can pop, and we can clean up after ourselves. However there's a whole mess of functionality we'd reasonably want. In particular, we have a proper array, but none of the slice functionality. 
That's actually pretty easy to solve: we can implement Deref. This will magically make our Vec coerce to, and behave like, a slice in all sorts of conditions. All we need is slice::from_raw_parts. It will correctly handle empty slices for us. Later once we set up zero-sized type support it will also Just Work for those too. use std::ops::Deref; impl Deref for Vec { type Target = [T]; fn deref(&self) -> &[T] { unsafe { ::std::slice::from_raw_parts(*self.ptr, self.len) } }\n} And let's do DerefMut too: use std::ops::DerefMut; impl DerefMut for Vec { fn deref_mut(&mut self) -> &mut [T] { unsafe { ::std::slice::from_raw_parts_mut(*self.ptr, self.len) } }\n} Now we have len, first, last, indexing, slicing, sorting, iter, iter_mut, and all other sorts of bells and whistles provided by slice. Sweet!","breadcrumbs":"Implementing Vec » Deref","id":"70","title":"Deref"},"71":{"body":"Something not provided by slice is insert and remove, so let's do those next. Insert needs to shift all the elements at the target index to the right by one. To do this we need to use ptr::copy, which is our version of C's memmove. This copies some chunk of memory from one location to another, correctly handling the case where the source and destination overlap (which will definitely happen here). If we insert at index i, we want to shift the [i .. len] to [i+1 .. len+1] using the old len. pub fn insert(&mut self, index: usize, elem: T) { // Note: `<=` because it's valid to insert after everything // which would be equivalent to push. assert!(index <= self.len, \"index out of bounds\"); if self.cap == self.len { self.grow(); } unsafe { if index < self.len { // ptr::copy(src, dest, len): \"copy from source to dest len elems\" ptr::copy(self.ptr.offset(index as isize), self.ptr.offset(index as isize + 1), self.len - index); } ptr::write(self.ptr.offset(index as isize), elem); self.len += 1; }\n} Remove behaves in the opposite manner. We need to shift all the elements from [i+1 .. len + 1] to [i .. 
len] using the new len. pub fn remove(&mut self, index: usize) -> T { // Note: `<` because it's *not* valid to remove after everything assert!(index < self.len, \"index out of bounds\"); unsafe { self.len -= 1; let result = ptr::read(self.ptr.offset(index as isize)); ptr::copy(self.ptr.offset(index as isize + 1), self.ptr.offset(index as isize), self.len - index); result }\n}","breadcrumbs":"Implementing Vec » Insert and Remove","id":"71","title":"Insert and Remove"},"72":{"body":"Let's move on to writing iterators. iter and iter_mut have already been written for us thanks to The Magic of Deref. However there's two interesting iterators that Vec provides that slices can't: into_iter and drain. IntoIter consumes the Vec by-value, and can consequently yield its elements by-value. In order to enable this, IntoIter needs to take control of Vec's allocation. IntoIter needs to be DoubleEnded as well, to enable reading from both ends. Reading from the back could just be implemented as calling pop, but reading from the front is harder. We could call remove(0) but that would be insanely expensive. Instead we're going to just use ptr::read to copy values out of either end of the Vec without mutating the buffer at all. To do this we're going to use a very common C idiom for array iteration. We'll make two pointers; one that points to the start of the array, and one that points to one-element past the end. When we want an element from one end, we'll read out the value pointed to at that end and move the pointer over by one. When the two pointers are equal, we know we're done. Note that the order of read and offset are reversed for next and next_back For next_back the pointer is always after the element it wants to read next, while for next the pointer is always at the element it wants to read next. To see why this is, consider the case where every element but one has been yielded. 
The array looks like this: S E\n[X, X, X, O, X, X, X] If E pointed directly at the element it wanted to yield next, it would be indistinguishable from the case where there are no more elements to yield. Although we don't actually care about it during iteration, we also need to hold onto the Vec's allocation information in order to free it once IntoIter is dropped. So we're going to use the following struct: struct IntoIter { buf: Unique, cap: usize, start: *const T, end: *const T,\n} And this is what we end up with for initialization: impl Vec { fn into_iter(self) -> IntoIter { // Can't destructure Vec since it's Drop let ptr = self.ptr; let cap = self.cap; let len = self.len; // Make sure not to drop Vec since that will free the buffer mem::forget(self); unsafe { IntoIter { buf: ptr, cap: cap, start: *ptr, end: if cap == 0 { // can't offset off this pointer, it's not allocated! *ptr } else { ptr.offset(len as isize) } } } }\n} Here's iterating forward: impl Iterator for IntoIter { type Item = T; fn next(&mut self) -> Option { if self.start == self.end { None } else { unsafe { let result = ptr::read(self.start); self.start = self.start.offset(1); Some(result) } } } fn size_hint(&self) -> (usize, Option) { let len = (self.end as usize - self.start as usize) / mem::size_of::(); (len, Some(len)) }\n} And here's iterating backwards. impl DoubleEndedIterator for IntoIter { fn next_back(&mut self) -> Option { if self.start == self.end { None } else { unsafe { self.end = self.end.offset(-1); Some(ptr::read(self.end)) } } }\n} Because IntoIter takes ownership of its allocation, it needs to implement Drop to free it. However it also wants to implement Drop to drop any elements it contains that weren't yielded. 
impl Drop for IntoIter { fn drop(&mut self) { if self.cap != 0 { // drop any remaining elements for _ in &mut *self {} let align = mem::align_of::(); let elem_size = mem::size_of::(); let num_bytes = elem_size * self.cap; unsafe { heap::deallocate(*self.buf as *mut _, num_bytes, align); } } }\n}","breadcrumbs":"Implementing Vec » IntoIter","id":"72","title":"IntoIter"},"73":{"body":"We've actually reached an interesting situation here: we've duplicated the logic for specifying a buffer and freeing its memory in Vec and IntoIter. Now that we've implemented it and identified actual logic duplication, this is a good time to perform some logic compression. We're going to abstract out the (ptr, cap) pair and give them the logic for allocating, growing, and freeing: struct RawVec { ptr: Unique, cap: usize,\n} impl RawVec { fn new() -> Self { assert!(mem::size_of::() != 0, \"TODO: implement ZST support\"); unsafe { RawVec { ptr: Unique::new(heap::EMPTY as *mut T), cap: 0 } } } // unchanged from Vec fn grow(&mut self) { unsafe { let align = mem::align_of::(); let elem_size = mem::size_of::(); let (new_cap, ptr) = if self.cap == 0 { let ptr = heap::allocate(elem_size, align); (1, ptr) } else { let new_cap = 2 * self.cap; let ptr = heap::reallocate(*self.ptr as *mut _, self.cap * elem_size, new_cap * elem_size, align); (new_cap, ptr) }; // If allocate or reallocate fail, we'll get `null` back if ptr.is_null() { oom() } self.ptr = Unique::new(ptr as *mut _); self.cap = new_cap; } }\n} impl Drop for RawVec { fn drop(&mut self) { if self.cap != 0 { let align = mem::align_of::(); let elem_size = mem::size_of::(); let num_bytes = elem_size * self.cap; unsafe { heap::deallocate(*self.ptr as *mut _, num_bytes, align); } } }\n} And change Vec as follows: pub struct Vec { buf: RawVec, len: usize,\n} impl Vec { fn ptr(&self) -> *mut T { *self.buf.ptr } fn cap(&self) -> usize { self.buf.cap } pub fn new() -> Self { Vec { buf: RawVec::new(), len: 0 } } // push/pop/insert/remove largely 
unchanged: // * `self.ptr -> self.ptr()` // * `self.cap -> self.cap()` // * `self.grow -> self.buf.grow()`\n} impl Drop for Vec { fn drop(&mut self) { while let Some(_) = self.pop() {} // deallocation is handled by RawVec }\n} And finally we can really simplify IntoIter: struct IntoIter