diff --git a/2_translated/font/trialFont10.NFTR b/2_translated/font/trialFont10.NFTR
new file mode 100644
index 0000000..22b5b55
Binary files /dev/null and b/2_translated/font/trialFont10.NFTR differ
diff --git a/2_translated/font/trialFont12.NFTR b/2_translated/font/trialFont12.NFTR
new file mode 100644
index 0000000..41536bf
Binary files /dev/null and b/2_translated/font/trialFont12.NFTR differ
diff --git a/2_translated/story/FSHI00.xml b/2_translated/story/FSHI00.xml
index 7be3a42..59f3150 100644
--- a/2_translated/story/FSHI00.xml
+++ b/2_translated/story/FSHI00.xml
@@ -75,7 +75,8 @@ belly is really the best!
グミ焼酎をいれて煮こむのが
秘訣なんだぞ
See?
-The secret is to cook it in some liquer gel.
+The secret is to cook it in some liqueur
+gel.
Maybe "That's right" for the first phrase
2
2
@@ -89,9 +90,9 @@ The secret is to cook it in some liquer gel.
ゼクスさん……となり街の
兄の娘が、もう何日も口もきかず
部屋からも出てこないらしい
- Mr. Zeks... My brother says his daughter
-hasn't spoken or come out of her room for
-days now. They're in the next city over.
+ Mr. Zex... My niece hasn't spoken or
+come out of her room for days
+now. She's in the next city over.
3
3
@@ -105,9 +106,9 @@ days now. They're in the next city over.
兄は、例の『デスピル病』じゃ
ないかって心配してるんだ……
ソーマで診てはもらえんかねぇ?
- He's worried that she might have contracted
-that disease called "Despir." Could you
-check up on her with your Soma?
+ My brother's worried that she caught
+that disease called "Despir." Could
+you check up on her with your Soma?
3
3
@@ -121,9 +122,9 @@ check up on her with your Soma?
デスピル病! 最近、流行って
るんだってね。スピリアが
暴走する原因不明の奇病なんだろ?
- Despir! It seems to be rampant these days.
-It's that strange disease which causes people's
-Spiria to go out of control, right?
+ Despir! It seems to be rampant nowadays.
+It's that strange disease which causes
+people's Spiria to go out of control, right?
1
4
@@ -137,7 +138,8 @@ Spiria to go out of control, right?
その上、普通の医者や薬では
どうにもできないときている
On top of that, ordinary doctors and
-medicines can't do anything to treat it.
+medicines can't do anything to treat
+it.
3
5
@@ -151,15 +153,15 @@ medicines can't do anything to treat it.
う〜ん、『スピリア』ってのは
難しく言うと精神と意思を生み出す
『生命の根源』だからねぇ
- Let's see, to put it simply, "Spiria" is the
-"root of life" that gives birth to our spirit
-and will.
+ Let's see, to put it simply, "Spiria" is
+the "root of life" that gives birth to
+our spirit and will.
1
6
1
1
- Problematic
+ Editing
59884
@@ -167,15 +169,15 @@ and will.
それを癒せるのは、人の
スピリアの中に『リンク』できる
神秘の武具『ソーマ』だけさ!
- Only Somas, which are a type of magical
-weapon, can "link" with a person's Spiria
-and heal them.
+ Only Somas, which are a type of
+magical weapon, can "link" with a
+person's Spiria and heal them.
1
6
2
1
- Problematic
+ Editing
60092
@@ -184,8 +186,8 @@ and heal them.
……スマンが、ワシはソーマを
気軽に使う訳にはいかんのだ
Stay quiet you fool!
-...Look, I'm sorry, but I can't just go using
-my Soma willy-nilly.
+...Look, I'm sorry, but I can't just go
+using my Soma willy-nilly.
2
7
@@ -200,9 +202,9 @@ my Soma willy-nilly.
たら助けろ。美人が困ってたら
絶対助けろ』って言ってるクセに
Why not? You always say to "help out
-someone in need." And "if it's a beautiful
-lady, then definitely help her out."
-
+those in need, especially if it's a
+gorgeous lady."
+ May need to rewrite this phrase since Shing uses it again when meeting Ines.
1
8
1
@@ -243,9 +245,9 @@ lady, then definitely help her out."
この前は、村の広場で
突然暴れだした奴にリンクして
すぐにおとなしくさせたじゃないか
- But the other day, when that guy started
-acting up in the village square, you linked
-with him and got him to calm down.
+ But the other day, when that guy was
+raging in the village square, you
+linked and calmed him down.
1
11
@@ -289,7 +291,7 @@ too up until then, Gramps.
憑いておったからな……
I hadn't intended to reveal it, but the
man who was acting up had been
-possessed by a "Xerom...."
+possessed by a "Xerom..."
2
12
@@ -303,7 +305,7 @@ possessed by a "Xerom...."
まさか、『彼女』の身に何かが?
ソーマリンクの感触では、かなり
力が衰えているようだったが……
- Could it be that something happened to "her?"
+ Could something have happened to "her?"
From the Soma Link, it seems that her
strength has diminished greatly...
@@ -319,9 +321,9 @@ strength has diminished greatly...
……無理は言いたくないが
教会もないこんな辺境では
ゼクスさんを頼るしかないんだよ
- ...I don't want to impose on you, but in
-such a remote place that has no church, we
-have no choice but to rely on you Mr. Zeks.
+ ...I hate to impose on you, but in such
+a remote place that has no church,
+we only have you to rely on, Mr. Zex.
3
13
@@ -689,9 +691,9 @@ hidden under the nukazuke!
そうさ!
思念術を操り人間のスピリアに
リンクできる神秘の武具なんだ!
- Yup! It's a magical weapon that can manipulate
-the power of Will and link to human
-Spirias!
+ Yup! It's a magical weapon that can
+manipulate the power of Will and
+link to human Spirias!
1
37
diff --git a/2_translated/story/FSHT00P.xml b/2_translated/story/FSHT00P.xml
index d8da2da..202d037 100644
--- a/2_translated/story/FSHT00P.xml
+++ b/2_translated/story/FSHT00P.xml
@@ -20,10 +20,10 @@
ゼクス
-
+ Zex
3
- To Do
+ Editing
@@ -52,26 +52,26 @@
インカローズ
-
+ Incarose
7
- To Do
+ Editing
おばさん
-
+ Old Lady
8
- To Do
+ Editing
リチア
-
+ Lithia
9
- To Do
+ Editing
@@ -84,18 +84,18 @@
砂浜の少女
-
+ Beach Girl
11
- To Do
+ Editing
近所のおじさん
-
+ Old Man
12
- To Do
+ Editing
@@ -348,9 +348,9 @@ Someone was shouting to at me to
ま、今は夢なんかより『ソーマ』
だ! ジィちゃんから一本とって
『ソーマ』を譲ってもらうんだ!!
- Oh well, who cares about some dream, it's
-a Soma I want! I'll land a hit on you and
-get you to give me your Soma!
+ Oh well, who cares about some dream,
+it's a Soma I want! I'll land a hit on you
+and get you to give me your Soma!
4
16
@@ -412,9 +412,9 @@ I managed to win against you then...
ばかもん! ワシは長寿
世界一の座を狙っとるんだ!!
あと十年や二十年くらい……
- You fool! I'll be taking the title for the
-longest living person in the world!
-I've still got a good 10 or 20 years left...
+ You fool! I'll be taking the title as
+the oldest person alive! I've still
+got a good 10 or 20 years left...
3
19
@@ -444,7 +444,8 @@ retiring anytime soon.
いいか、<Shing>。
真に鍛えるべきは『ここ』だ
Listen up, <Shing>.
-What you really need to work on is "here!"
+What you really need to work on is
+"here!"
3
20
@@ -471,7 +472,7 @@ What you really need to work on is "here!"
人の精神と意思を司る
生命の根源……『スピリア』だ
Wrong. I'm talking about the root
-of life... our "Spiria" that controls
+of life...our "Spiria" that controls
the spirit and will of man!
Need to check "thoughts" or "will"
3
@@ -501,9 +502,9 @@ thing in this world.
激しい感情——
怒りや憎しみ、恐怖……また時には
愛や夢ゆえに乱れ、壊れてしまう
- Strong emotions - like anger, hatred,
-fear, and even love or hope can
-disrupt and break it.
+ Strong emotions - like anger,
+hatred, fear, and even love or
+hope can disrupt and break it.
3
22
@@ -518,8 +519,8 @@ disrupt and break it.
だからこそソーマ使いは誰よりも
己のスピリアを鍛えねばならん
A Soma has the power to heal it.
-That's why, more than anyone,
-a Somatic must train his Spiria
+That's why, more than anyone,
+a Somatic must train his Spiria.
3
22
@@ -533,9 +534,9 @@ a Somatic must train his Spiria
<Shing>よ、一時の感情に
流されない、本当に強い
スピリアを見極め、育てるんだ
- <Shing>, don't let fleeting emotions get the
-better of you, nurture a Spiria that is
-discerning and firm.
+ <Shing>, don't let fleeting emotions
+get the better of you, nurture a
+Spiria that is discerning and firm.
3
22
@@ -578,8 +579,8 @@ match for a hungry belly.
………………遅れて家に
入った方が後片付けだぞッ!
But all right, let's eat.
-...Last one to get home has to clean up
-after!
+...Last one to get home has to
+clean up after!
3
25
@@ -607,8 +608,8 @@ That so unfair, Gramps!
なんだ!? 十六にもなって外の世界
を知らないなんてオレだけだよ
Damn it, why can't I leave the village!?
-I'm the only 16 year old who doesn't know
-anything about the outside world.
+I'm the only 16 year old who doesn't
+know anything about the outside world.
4
27
@@ -621,9 +622,9 @@ anything about the outside world.
……けど、約束を破ったら
『スピリアが弱い』って
ジィちゃんに笑われるしな……
- ...But if I break my promise now, Gramps
-is gonna totally mock me again and say
-that my "Spiria is weak..."
+ ...But if I break my promise now,
+Gramps is gonna totally mock me
+again and say that my "Spiria is weak..."
4
28
@@ -651,15 +652,15 @@ my Spiria-"
『大いなる海の眺め
だけなのであった』
〒〒〒〒〒BY<Shing>
- "is the only thing I have: a singular view
-of the great Sea."
+ "is the only thing I have: a singular
+view of the great Sea."
By <Shing>
4
29
2
1
- Problematic
+ Editing
62248
diff --git a/2_translated/story/MOUD03.xml b/2_translated/story/MOUD03.xml
index b37869a..94062ff 100644
--- a/2_translated/story/MOUD03.xml
+++ b/2_translated/story/MOUD03.xml
@@ -36,18 +36,18 @@
ペリドット
-
+ Peridot
5
- To Do
+ Editing
青年画家
-
+ Young Painter
6
- To Do
+ Editing
@@ -64,25 +64,25 @@
71528
S031401
いた!
-
+ There they are!
1
1
1
1
- To Do
+ Editing
71656
S031402
あの子……なんで逃げないのさ?
-
+ Why doesn't she just run away?
2
2
1
1
- To Do
+ Editing
71784
@@ -90,26 +90,30 @@
無理言うな。<Kohaku>は
高所恐怖症で、今はその
『恐怖』を抑える感情もないんだ
-
+ She can't. <Kohaku> was already afraid
+of heights, and right now she lacks the
+emotions to rein in her fear.
+
3
3
1
1
- To Do
+ Editing
72180
S031404
うぅ、高いの……やぁ……
助けて……お兄ちゃん……
-
+ Ugh... I hate heights...
+<Hisui>... save me...
4
4
1
1
- To Do
+ Editing
72348
@@ -117,50 +121,53 @@
あ〜あ、メソメソメソメソ……
うざいっての! あたし
そーゆーの大っ嫌いなんだよ!!
-
+ Augh, boohoo.
+Shut up already! I hate watching
+you cry!
5
5
1
1
- To Do
+ Editing
73704
S031406
あいつ!
-
+ How dare she!
1
6
1
1
- To Do
+ Editing
74112
S031407
ちょ……待てって!
-
+ Hey... Stop!
3
7
1
1
- To Do
+ Editing
74804
S031408
うう……
痛い……よぉ……
-
+ Ugh...
+Ow...
4
8
1
1
- To Do
+ Editing
75048
@@ -168,13 +175,14 @@
あんたねぇ……
誘拐したあたしが
言うのもなんだけどさ
-
+ Look... I know this is rich coming
+from your kidnapper, but...
5
9
1
1
- To Do
+ Problematic
75048
@@ -182,62 +190,66 @@
泣いたって状況は変わらないん
だよ。女だからこそ涙をこらえて
強く生きなきゃダメなんだ!
-
+ Crying won't change your situation.
+Since you're a woman, you have
+to hold back your tears and live strongly!
5
9
2
1
- To Do
+ Problematic
75048
S031411
ほら、涙をふいてしゃんとしな!
-
+ Here, wipe those tears of yours
+and hold your head up high!
5
9
3
1
- To Do
+ Editing
77256
S031412
<Kohaku>、早く逃げて!!
-
+ <Kohaku>, hurry and get out of there!
1
10
1
1
- To Do
+ Editing
77632
S031413
あ……ああ……
-
+ Ah... Ahhh...
4
11
1
1
- To Do
+ Editing
77760
S031414
だから、逃げられないって
言ってんだろ、ドアホ!!
-
+ I already told you that
+she can't run, you idiot!
3
12
1
1
- To Do
+ Editing
77888
@@ -245,25 +257,27 @@
ちょっとぉ……脅迫されてる
自覚あんの? 不意打ちの挙句
人数まで増えてるしぃ〜!?
-
+ Hey... do you even realize I'm
+blackmailing you? Why are there
+even more people with you!?
5
13
1
1
- To Do
+ Editing
79288
S031416
うあッ!
-
+ Augh!
1
14
1
1
- To Do
+ Editing
81084
@@ -271,25 +285,27 @@
は〜い、そこまで!
おとなしくスピルーンを渡さないと
お人形さんに傷がつくよ
-
+ All right, that's enough of that!
+Just hand over the Spiria Core shard
+and I won't hurt your little doll.
5
15
1
1
- To Do
+ Editing
81668
S031420
くっ……
-
+ Urgh...
1
16
1
1
- To Do
+ Editing
82944
@@ -310,88 +326,93 @@
S031422
ねぇ、そのスピルーンって
あの子のなんだよね?
-
+ Hey, this Spiria Core shard is that
+girl's, right?
2
18
1
1
- To Do
+ Editing
84848
S031423
……なら、ここは
そのパワーを使うしかないよ
-
+ ...Then it looks like we'll have
+to make use of its power.
2
19
1
1
- To Do
+ Editing
85484
S031424
わぁ、つまづいた!
-
+ Aaah! Oh no, I tripped!
2
20
1
1
- To Do
+ Editing
87576
S031425
あっ…………!!
-
+ Ah...!
4
21
1
1
- To Do
+ Editing
87704
S031426
ちょっとぉ!
今、何したッ!?
-
+ Hey!
+What'd you just do!?
5
22
1
1
- To Do
+ Editing
87888
S031427
<Kohaku>、安心して。
その人は、ひどい事しないから
-
+ <Kohaku>, don't worry.
+That woman won't do anything
+bad to you.
2
23
1
1
- To Do
+ Editing
88016
S031428
…………うそ
-
+ ...You're lying.
4
24
1
1
- To Do
+ Editing
88144
@@ -399,13 +420,15 @@
本当だよ!
その人の側で
おとなしくしてれば、安全なんだ!!
-
+ It's true!
+If you just listen to what she
+says, you'll be okay!
2
25
1
1
- To Do
+ Editing
88272
@@ -413,64 +436,70 @@
うそ……
うそッ!
うそッッ!!
-
+ No, you're lying...
+You're lying!
+You're lying!!
4
26
1
1
- To Do
+ Editing
88272
S031431
うそだぁぁぁーーーーーッッ!!!
-
+ Liaaaaaaarrrrr!!!
4
26
2
1
- To Do
+ Editing
89528
S031432
な……ッ??
こいつもソーマを!?
-
+ Wha...!?
+She can use a Soma too!?
5
27
1
1
- To Do
+ Editing
90484
S031433
形勢逆転だな。
逃げんなら、追わないぜ?
-
+ Looks like the tables have turned.
+If you run now, we won't chase
+after you.
3
28
1
1
- To Do
+ Editing
90692
S031434
あーあ、しょーがないかぁ。
超めんどいけど……
-
+ Aww well, guess it can't be helped.
+It may be a bit annoying, but...
5
29
1
1
- To Do
+ Editing
90692
@@ -478,39 +507,44 @@
あんたらをボコって
あの子ごと教会に
連れ帰る事にするよ
-
+ I'll beat you to a pulp and drag
+you all and that girl with me
+to the church.
5
29
2
1
- To Do
+ Editing
91348
崩落があったって
聞いたけど、相変わらず
グリム山は美しいなぁ!
-
+ I heard there was a rockslide, but
+Mount Grimm is still as beautiful
+as ever!
6
30
1
1
- To Do
+ Editing
91348
だが、問題は
この素晴らしい景色を
どう描くかだ……
-
+ However, the question is how to
+paint this amazing view...
6
30
2
1
- To Do
+ Editing
@@ -1289,13 +1323,14 @@
94604
う〜ん、この素晴らしい景色を
どう描こう……
-
+ Hmmm... how to paint this
+amazing view...
7
101
1
1
- To Do
+ Editing
94856
diff --git a/2_translated/story/MOUD03P.xml b/2_translated/story/MOUD03P.xml
index d8ae605..f84d0c5 100644
--- a/2_translated/story/MOUD03P.xml
+++ b/2_translated/story/MOUD03P.xml
@@ -4,10 +4,10 @@
ペリドット
-
+ Peridot
1
- To Do
+ Editing
@@ -44,10 +44,10 @@
青年画家
-
+ Young Painter
6
- To Do
+ Editing
@@ -66,13 +66,15 @@
や、やばい……こんな奴らに
負けたって隊長に知れたら
出世どころか降格が大決定……
-
+ Oh crap... if the captain finds out
+I lost to these jokers, I'll be
+seriously demoted...
1
1
1
1
- To Do
+ Editing
70432
@@ -80,87 +82,92 @@
……よっし!
今回の件はなかった事にして
やるから感謝しなっ! じゃね!!
-
+ Okay! Let's just pretend this little
+excursion never happened. Feel
+free to thank me! Later!!
+
1
2
1
1
- To Do
+ Editing
71244
S031503
なんなんだ、あいつ?
-
+ What the heck is her problem?
2
3
1
1
- To Do
+ Editing
71372
S031504
まったく、あんな性格悪い奴
見た事ないよねっ!
-
+ Jeez, I've never seen anyone with
+such a bad attitude before!
3
4
1
1
- To Do
+ Editing
71616
S031505
…………そーか?
-
+ ...Oh really?
2
5
1
1
- To Do
+ Editing
71820
S031506
<Kohaku>、もう大丈夫だよ
-
+ <Kohaku>, everything's okay now.
4
6
1
1
- To Do
+ Editing
73492
S031507
きゃぁぁぁーーーッ!!!
-
+ Ahhhhh!!!
5
7
1
1
- To Do
+ Editing
73836
S031508
やべぇッ!
今の戦闘が地盤に影響したんだ!
-
+ This is bad! Our fight just now
+must've affected the ground!
2
8
1
1
- To Do
+ Editing
74384
@@ -168,26 +175,28 @@
待ったぁっ!
<Shing>のいるトコも
ヤバイよっ!!
-
+ Wait! The ground where <Shing>'s
+standing is also unstable!!
3
9
1
1
- To Do
+ Editing
74972
S031510
ああぁ……ッ!
大丈夫なんて、うそ……
-
+ Ahhh...!
+You're lying, it's not okay...
5
10
1
1
- To Do
+ Editing
75100
@@ -195,13 +204,14 @@
<Kohaku>!
オレが受けとめるから
こっちに飛んで!!
-
+ <Kohaku>!
+Jump over here, I'll catch you!!
4
11
1
1
- To Do
+ Editing
75228
@@ -209,154 +219,164 @@
うそ!
そこも崩れるよ……
きみも…………逃げるッ!
-
+ You're lying!
+It'll collapse there too...
+You should... run, too!
5
12
1
1
- To Do
+ Editing
75508
S031513
くそッ、スピルーンを
戻したのが裏目に出やがった!
-
+ Dammit, giving back her Spiria
+Core shard backfired on us!
2
13
1
1
- To Do
+ Editing
75636
S031514
ボ、ボクのせいじゃないよぉ!?
-
+ I-It's not my fault!
3
14
1
1
- To Do
+ Editing
75936
S031515
いやぁぁぁッ!
怖いッ! 怖いよぉッ!!!
-
+ Noooo!
+I'm scared! I'm scared!!!
5
15
1
1
- To Do
+ Editing
76408
S031516
<Kohaku>ッ!
<Kohaku>ーーーーーッ!!
-
+ <Kohaku>!
+<Kohaku>!!!
2
16
1
1
- To Do
+ Editing
76616
S031517
<Shing>〜〜〜っ!
あんただけでも下がれぇっ!!
-
+ <Shing>!!!
+You should at least get back here!
3
17
1
1
- To Do
+ Editing
76744
S031518
嫌だ!!
ここで逃げたら……
-
+ No!!
+If I run away now...
4
18
1
1
- To Do
+ Editing
77120
S031519
二度と<Kohaku>に
信じてもらえない!!
-
+ <Kohaku> will never believe in me
+again!!
4
19
1
1
- To Do
+ Editing
78568
S031520
バ、バカ野郎!
どういうつもりだ!!
-
+ Y-You idiot! What the heck
+are you doing!!
2
20
1
1
- To Do
+ Editing
78696
S031521
ここは…………
<Shing>に任せよう
-
+ Just... leave this to <Shing>.
3
21
1
1
- To Do
+ Editing
79048
S031522
<Kohaku>、オレの手を握って。
ね……オレは逃げないよ
-
+ Here <Kohaku>, take my hand.
+...I won't run away.
4
22
1
1
- To Do
+ Editing
79488
S031523
なんで……こんな事……?
-
+ Why are... you doing this...?
5
23
1
1
- To Do
+ Editing
79616
@@ -364,13 +384,15 @@
オレ、<Beryl>にリンクして
『疑い』は『信じたい』気持ちの
裏返しなんだってわかったんだ
-
+ When I linked with <Beryl>, I understood
+that doubt is just the flip side of
+wanting to believe.
4
24
1
1
- To Do
+ Editing
79616
@@ -378,39 +400,43 @@
でも、オレはバカだからさ。
命をかけるしか信用してもらう
方法が思いつかないんだ
-
+ But I'm an idiot, and the only
+way I think you'll believe me
+is if I risk my life.
4
24
2
1
- To Do
+ Editing
79836
S031526
ああ、わたし……わかんないっ!
怖い……怖いよぉ……
-
+ I... don't know!
+I'm scared... I'm scared...
5
25
1
1
- To Do
+ Editing
80072
S031527
……ごめん、きみをそんな風に
しちゃったのはオレなんだ
-
+ ...I'm sorry, I'm the one who
+made you like this.
4
26
1
1
- To Do
+ Editing
80072
@@ -418,25 +444,27 @@
だから、オレは逃げない。
命をかけて、きみを守るから
……一緒に飛ぼう!
-
+ That's why I won't run.
+I'll protect you with my life.
+...Let's jump together!
4
26
2
1
- To Do
+ Editing
80200
S031529
……<Shing>…………
-
+ <Shing>...
5
27
1
1
- To Do
+ Editing
80448
@@ -444,26 +472,29 @@
きゃぁぁぁ〜ッ!!
いやぁ! ダメッ……怖いよぉッ!
わたしもきみも……死んじゃうッ!!
-
+ Aaaaah!! No, I can't! I'm scared!
+You and me, we're both...
+going to die!
5
28
1
1
- To Do
+ Editing
80848
S031531
オレも<Kohaku>も死なないよ。
オレは知ってるんだ
-
+ We're not going to die.
+I know it.
4
29
1
1
- To Do
+ Editing
80848
@@ -471,13 +502,15 @@
<Kohaku>は、オレを……
みんなを信じてくれる
強くて優しい女の子だって
-
+ I believe in you <Kohaku>...
+that you're the strong, kind girl
+who believes in me... in everyone!
4
29
2
1
- To Do
+ Editing
81040
@@ -485,26 +518,29 @@
みんなを信じる……わたし?
<Shing>……は……わたしを
……信じてくれる……の?
-
+ Believe in everyone... in me?
+<Shing>, you'll... believe...
+in me?
5
30
1
1
- To Do
+ Editing
81292
S031534
信じるよ。
この命とスピリアをかけて
-
+ I believe in you.
+I'd put my life and my Spiria on it.
4
31
1
1
- To Do
+ Editing
82140
@@ -512,38 +548,40 @@
あぁ……わたし……も……
<Shing>……を…………
<Shing>を………………
-
+ Then... I too... <Shing>, I...
+
5
32
1
1
- To Do
+ Editing
82564
S031536
…………信じる
-
+ ...believe in you
5
33
1
1
- To Do
+ Editing
86292
S031537
あはは……ヤバかったぁ……
<Kohaku>、怪我はない?
-
+ Phew... that was dangerous...
+<Kohaku>, are you hurt?
4
34
1
1
- To Do
+ Editing
87548
@@ -551,26 +589,29 @@
白と黒のスピルーン……そうか
『疑い』と『信頼』は裏表——
ふたつでひとつの想いなんだね
-
+ A black and white Spiria Core shard...
+That's right, doubt and trust are
+opposites - two sides of the same coin.
4
35
1
1
- To Do
+ Editing
89332
S031539
このバカ<Shing>ッ!
てめぇ、無茶ばっかしやがって!!
-
+ <Shing>, you moron!
+You keep pulling reckless stunts!!
2
36
1
1
- To Do
+ Editing
89536
@@ -578,26 +619,29 @@
あっ、<Hisui>!
今、初めてオレを
名前で呼んでくれたね?
-
+ Oh, <Hisui>!
+That's the first time you've ever
+used my name!
4
37
1
1
- To Do
+ Editing
90020
S031541
そ、そんな事は
どーでもいーだろうが!
-
+ T-That's got nothin' to do with
+anything!
2
38
1
1
- To Do
+ Editing
90020
@@ -605,88 +649,98 @@
……けど、てめぇが<Kohaku>を
助けたのは事実だし、一応言うぞ。
あ、ありがと…………
-
+ ...But you did save <Kohaku>,
+so I'll say it just this once.
+Th-Thanks...
2
38
2
1
- To Do
+ Editing
90432
S031543
お腹が鳴くのは……生きてる証拠
-
+ A stomach growling...
+Proof you're alive
5
39
1
1
- To Do
+ Editing
90672
S031544
うん……オレも<Kohaku>も
元気に生きてる!
-
+ Yup... You and me, we're both
+alive and well!
4
40
1
1
- To Do
+ Editing
90672
S031545
さぁ、宿に帰って
一緒に女将さんのご飯を食べよう
-
+ All right, let's head back to the inn
+and dig into the innkeeper's cooking.
4
40
2
1
- To Do
+ Editing
96856
ありがとう!
きみたちのおかげで
迷いがふっきれたよ!
-
+ Thanks so much!
+You guys have really helped me
+overcome my doubts!
6
41
1
1
- To Do
+ Editing
96856
よ〜し、最高の絵を描くぞぉ!
-
+ All right, I'm going to paint the
+best picture ever!
6
41
2
1
- To Do
+ Editing
96856
お礼にこれをあげる。
ここに来る途中で拾った
芸術的に美しい石だよ
-
+ Let me give this to you as thanks.
+It's an artistically beautiful stone I
+picked up on the way here.
6
41
3
1
- To Do
+ Editing
diff --git a/project.json b/project.json
index db561de..223ca13 100644
--- a/project.json
+++ b/project.json
@@ -10,13 +10,14 @@
"story_original": "./1.5_original_translated/story/",
"menu_xml": "./2_translated/menu/",
"story_xml": "./2_translated/story/",
- "skit_xml": "./2_Translated/skits/",
+ "skit_xml": "./2_translated/skits/",
+ "new_font": "./2_translated/font/",
"final_files": "./3_patched/",
"temp_files": "./3_patched/patched_temp/",
"game_builds": "./4_builds/",
"saved_files": "./tools/saved_files/",
"tools": "./tools/"
},
- "asm_file": "main.asm",
+ "asm_file": "adjust_textbox.asm",
"main_exe_name": "arm9.bin"
}
\ No newline at end of file
diff --git a/tools/__init__.py b/tools/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tools/asm/adjust_textbox.asm b/tools/asm/adjust_textbox.asm
new file mode 100644
index 0000000..7631fb5
--- /dev/null
+++ b/tools/asm/adjust_textbox.asm
@@ -0,0 +1,6 @@
+.ps2
+.open __OVERLAY3_PATH__, 0x213A4E0
+
+.org 0x217D650
+
+ .byte 00
\ No newline at end of file
diff --git a/tools/asm/armips.exe b/tools/asm/armips.exe
new file mode 100644
index 0000000..7ea0256
Binary files /dev/null and b/tools/asm/armips.exe differ
diff --git a/tools/batchs/Batch_Files.zip b/tools/batchs/Batch_Files.zip
index 9a6b881..85716c9 100644
Binary files a/tools/batchs/Batch_Files.zip and b/tools/batchs/Batch_Files.zip differ
diff --git a/tools/pythonlib/Tales_Exe.py b/tools/pythonlib/Tales_Exe.py
new file mode 100644
index 0000000..b24c589
--- /dev/null
+++ b/tools/pythonlib/Tales_Exe.py
@@ -0,0 +1,232 @@
+import argparse
+from pathlib import Path
+
+from ToolsTOH import ToolsTOH
+
+SCRIPT_VERSION = "0.0.3"
+
+
+def get_arguments(argv=None):
+ # Init argument parser
+ parser = argparse.ArgumentParser()
+
+ parser.add_argument(
+ "-g",
+ "--game",
+ choices=["TOR", "NDX", "TOH"],
+ required=True,
+ metavar="game",
+ help="Options: TOR, NDX, TOH",
+ )
+
+ parser.add_argument(
+ "-p",
+ "--project",
+ required=True,
+ type=Path,
+ metavar="project",
+ help="project.json file path",
+ )
+
+ sp = parser.add_subparsers(title="Available actions", required=False, dest="action")
+
+ # Extract commands
+ sp_extract = sp.add_parser(
+ "extract",
+ description="Extract the content of the files",
+ help="Extract the content of the files",
+ formatter_class=argparse.RawTextHelpFormatter,
+ )
+
+ sp_extract.add_argument(
+ "-ft",
+ "--file_type",
+ choices=["Iso", "Menu", "Story", "Skits", "All"],
+ required=True,
+ metavar="file_type",
+ help="(Required) - Options: Iso, Menu, Story, Skits, All",
+ )
+
+ sp_extract.add_argument(
+ "-i",
+ "--iso",
+ required=False,
+ default="../b-topndxj.iso",
+ metavar="iso",
+ help="(Optional) - Only for extract Iso command",
+ )
+
+ sp_extract.add_argument(
+ "-r",
+ "--replace",
+ required=False,
+ metavar="replace",
+ default=False,
+ help="(Optional) - Boolean to use translations from the repo to overwrite the ones in the Data folder",
+ )
+
+ sp_extract.add_argument(
+ "--only-changed",
+ required=False,
+ action="store_true",
+ help="(Optional) - Extract only changed files not yet committed",
+ )
+
+ sp_insert = sp.add_parser(
+ "insert",
+ help="Take the new texts and recreate the files",
+ )
+
+ sp_insert.add_argument(
+ "-ft",
+ "--file_type",
+ choices=["Iso", "Main", "Menu", "Story", "Skits", "All", "Asm"],
+ required=True,
+ metavar="file_type",
+ help="(Required) - Options: Iso, Main, Menu, Story, Skits, All, Asm",
+ )
+
+ sp_insert.add_argument(
+ "-i",
+ "--iso",
+ required=False,
+ default="",
+ metavar="iso",
+ help="(Deprecated) - No longer in use for insertion",
+ )
+
+ sp_insert.add_argument(
+ "-des",
+ "--des",
+ required=False,
+ default="",
+ metavar="des",
+ help="(Optional) - Specify Desmume location to use together with the saved file",
+ )
+
+ sp_insert.add_argument(
+ "-save",
+ "--save",
+ required=False,
+ default="",
+ metavar="save",
+ help="(Optional) - Specify the saved file to put in desmume folder",
+ )
+
+ sp_insert.add_argument(
+ "--with-proofreading",
+ required=False,
+ action="store_const",
+ const="Proofreading",
+ default="",
+ help="(Optional) - Insert lines in 'Proofreading' status",
+ )
+
+ sp_insert.add_argument(
+ "--with-editing",
+ required=False,
+ action="store_const",
+ const="Editing",
+ default="",
+ help="(Optional) - Insert lines in 'Editing' status",
+ )
+
+ sp_insert.add_argument(
+ "--with-problematic",
+ required=False,
+ action="store_const",
+ const="Problematic",
+ default="",
+ help="(Optional) - Insert lines in 'Problematic' status",
+ )
+
+ sp_insert.add_argument(
+ "--only-changed",
+ required=False,
+ action="store_true",
+ help="(Optional) - Insert only changed files not yet committed",
+ )
+
+ args = parser.parse_args(argv)
+
+ return args
+
+
+def getTalesInstance(args, game_name):
+
+ if args.action == "insert":
+ insert_mask = [
+ args.with_proofreading,
+ args.with_editing,
+ args.with_problematic,
+ ]
+ else:
+ insert_mask = []
+
+ talesInstance = ToolsTOH(
+ args.project.resolve(), insert_mask, getattr(args, "only_changed", False)
+ )
+
+ return talesInstance
+
+
+if __name__ == "__main__":
+
+ args = get_arguments()
+ game_name = args.game
+ tales_instance = getTalesInstance(args, game_name)
+
+ if args.action == "insert":
+
+ if game_name == "TOH":
+
+ if args.file_type == "Menu":
+ #tales_instance.decompress_arm9()
+ tales_instance.pack_all_menu()
+ tales_instance.make_iso(Path(args.iso))
+
+ if args.file_type == "Iso":
+ tales_instance.compress_arm9()
+ tales_instance.make_iso(Path(args.iso))
+
+ elif args.file_type == "Skits":
+ tales_instance.pack_all_skits()
+
+ elif args.file_type == "Story":
+ tales_instance.pack_all_story()
+
+ elif args.file_type == "All":
+ tales_instance.pack_all_skits()
+ tales_instance.pack_all_story()
+ tales_instance.pack_all_menu()
+ tales_instance.update_font()
+ tales_instance.patch_binaries()
+ tales_instance.save_iso(Path(args.iso))
+ tales_instance.update_save_file(Path(args.des), args.save)
+
+ if args.action == "extract":
+
+ if game_name == "TOH":
+
+ if args.file_type == "Menu":
+ #tales_instance.unpack_menu_files()
+ tales_instance.extract_all_menu(keep_translations=True)
+
+ elif args.file_type == "Iso":
+ tales_instance.extract_Iso(Path(args.iso))
+ tales_instance.decompress_arm9()
+ tales_instance.decompress_overlays()
+
+ elif args.file_type == "Skits":
+ tales_instance.extract_all_skits(args.replace)
+
+ elif args.file_type == "Story":
+ tales_instance.extract_all_story(args.replace)
+
+ elif args.file_type == "All":
+ tales_instance.extract_Iso(Path(args.iso))
+ tales_instance.decompress_arm9()
+ tales_instance.decompress_overlays()
+ tales_instance.extract_all_menu(keep_translations=True)
+ tales_instance.extract_all_skits(args.replace)
+ tales_instance.extract_all_story(args.replace)
\ No newline at end of file
diff --git a/tools/pythonlib/ToolsTOH.py b/tools/pythonlib/ToolsTOH.py
new file mode 100644
index 0000000..a604a03
--- /dev/null
+++ b/tools/pythonlib/ToolsTOH.py
@@ -0,0 +1,898 @@
+import os
+import shutil
+from os import stat_result
+
+import pandas as pd
+
+from pathlib import Path
+import pyjson5 as json
+import subprocess
+import datetime
+import lxml.etree as etree
+from formats.FileIO import FileIO
+from formats.fps4 import Fps4
+from formats.tss import Tss
+from utils.dsv2sav import sav_to_dsv
+from formats.text_toh import text_to_bytes, bytes_to_text
+import re
+from itertools import chain
+import io
+from tqdm import tqdm
+import struct
+from ndspy import rom, codeCompression
+from ndspy.code import loadOverlayTable, saveOverlayTable
+
+
+class ToolsTOH():
+
+
+ def __init__(self, project_file: Path, insert_mask: list[str], changed_only: bool = False) -> None:
+ os.environ["PATH"] += os.pathsep + os.path.join( os.getcwd(), 'pythonlib', 'utils')
+ base_path = project_file.parent
+
+ if os.path.exists('programs_infos.json'):
+ json_data = json.load(open('programs_infos.json'))
+ self.desmume_path = Path(json_data['desmume_path'])
+ self.save_size = json_data['save_size']
+
+
+ self.folder_name = 'TOH'
+ self.jsonTblTags = {}
+ self.ijsonTblTags = {}
+ with open(project_file, encoding="utf-8") as f:
+ json_raw = json.load(f)
+
+ self.paths: dict[str, Path] = {k: base_path / v for k, v in json_raw["paths"].items()}
+ self.main_exe_name = json_raw["main_exe_name"]
+ self.asm_file = json_raw["asm_file"]
+
+ # super().__init__("TOR", str(self.paths["encoding_table"]), "Tales-Of-Rebirth")
+
+ with open(self.paths["encoding_table"], encoding="utf-8") as f:
+ json_raw = json.load(f)
+
+ for k, v in json_raw.items():
+ self.jsonTblTags[k] = {int(k2, 16): v2 for k2, v2 in v.items()}
+
+
+ for k, v in self.jsonTblTags.items():
+ if k in ['TAGS', 'TBL']:
+ self.ijsonTblTags[k] = {v2:k2 for k2, v2 in v.items()}
+ else:
+ self.ijsonTblTags[k] = {v2: hex(k2).replace('0x', '').upper() for k2, v2 in v.items()}
+ self.iTags = {v2.upper(): k2 for k2, v2 in self.jsonTblTags['TAGS'].items()}
+ self.id = 1
+
+ # byteCode
+ self.story_byte_code = b"\xF8"
+ self.story_struct_byte_code = [b'\x0E\x10\x00\x0C\x04', b'\x00\x10\x00\x0C\x04']
+ self.VALID_VOICEID = [r'(VSM_\w+)', r'(VCT_\w+)', r'(S\d+)', r'(C\d+)']
+ self.list_status_insertion: list[str] = ['Done']
+ self.list_status_insertion.extend(insert_mask)
+ self.COMMON_TAG = r"(<[\w/]+:?\w+>)"
+ self.changed_only = changed_only
+ self.repo_path = str(base_path)
+ self.file_dict = {
+ "skit": "data/fc/fcscr",
+ "story": "data/m"
+ }
+
+ def extract_Iso(self, game_iso: Path) -> None:
+
+ #Extract all the files
+ print("Extracting the Iso's files...")
+ extract_to = self.paths["original_files"]
+ #self.clean_folder(extract_to)
+
+ path = self.folder_name / extract_to
+ args = ['ndstool', '-x', os.path.basename(game_iso),
+ '-9', path/'arm9.bin',
+ '-7', path/'arm7.bin',
+ '-y9', path/'y9.bin',
+ '-y7', path/'y7.bin',
+ '-d', path/'data',
+ '-y', path/'overlay',
+ '-t', path/'banner.bin',
+ '-h', path/'header.bin']
+
+ wrk_dir = os.path.normpath(os.getcwd() + os.sep + os.pardir)
+ subprocess.run(args, cwd=wrk_dir, stdout = subprocess.DEVNULL)
+
+ #Update crappy arm9.bin to tinke's version
+ with open(self.folder_name / extract_to / 'arm9.bin', "rb+") as f:
+ data = f.read()
+ f.seek(len(data) - 12)
+ f.truncate()
+
+ #Copy to patched folder
+ #shutil.copytree(os.path.join('..', self.folder_name, self.paths["original_files"]), os.path.join('..', self.folder_name, self.paths["final_files"]), dirs_exist_ok=True)
+
+ def make_iso(self, game_iso) -> None:
+ #Clean old builds and create new one
+ self.clean_builds(self.paths["game_builds"])
+
+ # Set up new iso name and copy original iso in the folder
+
+ n: datetime.datetime = datetime.datetime.now()
+ new_iso = f"TalesofHearts_{n.year:02d}{n.month:02d}{n.day:02d}{n.hour:02d}{n.minute:02d}.nds"
+ print(f'Making Iso {new_iso}...')
+ self.new_iso = new_iso
+ shutil.copy(game_iso, self.paths['game_builds'] / new_iso)
+
+ path = self.folder_name / self.paths["final_files"]
+
+ args = ['ndstool', '-c', new_iso,
+ '-9', path / 'arm9.bin',
+ '-7', path / 'arm7.bin',
+ '-y9', path / 'y9.bin',
+ '-y7', path / 'y7.bin',
+ '-d', path / 'data',
+ '-y', path / 'overlay',
+ '-t', path / 'banner.bin',
+ '-h', path / 'header.bin']
+
+ subprocess.run(args, cwd=self.paths["game_builds"], stdout = subprocess.DEVNULL)
+
+ def update_font(self):
+ shutil.copyfile(self.paths['new_font'] / 'trialFont10.NFTR', self.paths['final_files'] / 'data' / 'trialFont10.NFTR')
+ shutil.copyfile(self.paths['new_font'] / 'trialFont12.NFTR', self.paths['final_files'] / 'data' / 'trialFont12.NFTR')
+
+ def patch_binaries(self):
+ asm_path = self.paths["tools"] / "asm"
+
+ env = os.environ.copy()
+ env["PATH"] = f"{asm_path.as_posix()};{env['PATH']}"
+
+ r = subprocess.run(
+ [
+ str(self.paths["tools"] / "asm" / "armips.exe"),
+ str(self.paths["tools"] / "asm" / self.asm_file),
+ "-strequ",
+ "__OVERLAY3_PATH__",
+ str(self.paths["temp_files"] / 'overlay' / 'overlay_0003.bin')
+ ], env=env)
+ if r.returncode != 0:
+ raise ValueError("Error building code")
+
+
+ def update_arm9_size(self, game_iso:Path):
+
+ with FileIO(game_iso, 'rb') as f:
+ f.seek(0x28)
+ load = f.read_uint32()
+ f.seek(0x70)
+ auto = f.read_uint32() - load
+
+ compressed_arm9_path = self.paths['final_files'] / 'arm9.bin'
+ decompressed_arm9_path = self.paths['temp_files'] / 'arm9/arm9.bin'
+ arm9_comp_size = os.path.getsize(compressed_arm9_path)
+ arm9_decomp_size = os.path.getsize(decompressed_arm9_path)
+ with FileIO(compressed_arm9_path, 'r+b') as f:
+ f.seek(auto - 4)
+ offset = f.read_uint32() - load
+
+ #1st value to update
+ f.seek(offset)
+ val1 = load + arm9_decomp_size - 0x18
+ f.write_uint32(val1)
+
+ #2nd value to update
+ f.seek(offset + 1*4)
+ val2 = load + arm9_decomp_size
+ f.write_uint32(val2)
+
+ #3rd value to update
+ f.seek(offset + 5*4)
+ val3 = load + arm9_comp_size
+ f.write_uint32(val3)
+
+ f.seek(0)
+ return f.read()
+
+    def update_overlays(self, romnds: rom, overlays_id: list):
+        # the decompression callback is a stub; only the table metadata is needed here
+        table = loadOverlayTable(romnds.arm9OverlayTable, lambda x, y: bytes())
+
+        for ov_id in overlays_id:
+            ov = table[ov_id]
+            ov.compressed = True
+
+            self.compress_overlays()
+            with open(self.paths['final_files'] / f'overlay/overlay_000{ov_id}.bin', 'rb') as f:
+                data_compressed = f.read()
+
+            ov.compressedSize = len(data_compressed)
+            romnds.files[ov.fileID] = data_compressed
+
+ romnds.arm9OverlayTable = saveOverlayTable(table)
+
+
+
+ def save_iso(self, game_iso:Path):
+
+ self.clean_builds(self.paths["game_builds"])
+ n: datetime.datetime = datetime.datetime.now()
+ new_iso = f"TalesofHearts_{n.year:02d}{n.month:02d}{n.day:02d}{n.hour:02d}{n.minute:02d}.nds"
+ print(f'Replacing files in new build: {new_iso}...')
+ self.new_iso = new_iso
+
+ romnds = rom.NintendoDSRom.fromFile(game_iso)
+ path = Path(self.paths['final_files'])
+ for file in path.rglob("*"):
+
+ if file.is_file() and 'patched_temp' not in str(file) and 'overlay' not in str(file) and file.stem != ".gitignore":
+
+ with open(file, "rb") as f:
+ data = f.read()
+ if file.stem == "arm9":
+ self.compress_arm9()
+ data = self.update_arm9_size(game_iso)
+ romnds.arm9 = data
+
+ else:
+
+ i = file.parts.index('3_patched')
+ rem = file.parts[(i+1):]
+ path_file = '/'.join(rem)
+ path_file = path_file.replace('data/', '')
+ romnds.setFileByName(path_file, data)
+
+ self.update_overlays(romnds, [0,3])
+ romnds.saveToFile(self.paths['game_builds'] / self.new_iso)
+
+
+ def decompress_arm9(self):
+
+        # Copy the original arm9.bin into an arm9 folder
+ new_arm9 = self.paths['extracted_files'] / 'arm9' / 'arm9.bin'
+ new_arm9.parent.mkdir(parents=True, exist_ok=True)
+ shutil.copy(self.paths['original_files'] / 'arm9.bin', new_arm9)
+
+ #Decompress the file using blz
+ print('Decompressing Arm9...')
+ args = ['blz', '-d', new_arm9]
+ subprocess.run(args, cwd=self.paths['tools'] / 'pythonlib' / 'utils', stdout = subprocess.DEVNULL)
+
+ def compress_arm9(self):
+
+ shutil.copy(self.paths['temp_files'] / 'arm9' / 'arm9.bin', self.paths['final_files'] / 'arm9.bin')
+
+        # Compress the file using blz
+ print('Compressing Arm9 and Overlays...')
+ args = ['blz', '-en9', self.paths['final_files'] / 'arm9.bin']
+ subprocess.run(args, cwd=self.paths['tools'] / 'pythonlib' / 'utils', stdout = subprocess.DEVNULL)
+
+ # Update crappy arm9.bin to tinke's version
+ #with open(self.paths['final_files'] / 'arm9.bin', "rb+") as f:
+ # data = f.read()
+ # f.seek(len(data) - 12)
+ #f.truncate()
+
+
+ def clean_folder(self, path: Path) -> None:
+ target_files = list(path.iterdir())
+ if len(target_files) != 0:
+ print("Cleaning folder...")
+ for file in target_files:
+ if file.is_dir():
+ shutil.rmtree(file)
+ elif file.name.lower() != ".gitignore":
+ file.unlink(missing_ok=False)
+
+ def decompress_overlays(self):
+        # Copy the original overlays into an overlay folder
+ new_overlay = self.paths['extracted_files'] / 'overlay'
+ new_overlay.mkdir(parents=True, exist_ok=True)
+ shutil.copytree(src=self.paths['original_files'] / 'overlay', dst=new_overlay, dirs_exist_ok=True)
+
+ # Decompress the file using blz
+ print('Decompressing Overlays...')
+        # subprocess does not go through a shell, so expand the glob ourselves
+        args = ['blz', '-d', *sorted(new_overlay.glob('overlay*'))]
+        subprocess.run(args, cwd=self.paths['tools'] / 'pythonlib' / 'utils', stdout=subprocess.DEVNULL)
+
+ def compress_overlays(self):
+
+ overlay_folder = self.paths['final_files'] / 'overlay'
+ overlay_folder.mkdir(parents=True, exist_ok=True)
+ shutil.copy(self.paths['temp_files'] / 'overlay' / 'overlay_0003.bin', overlay_folder / 'overlay_0003.bin')
+ args = ['blz', '-en', overlay_folder / 'overlay_0003.bin']
+ subprocess.run(args, cwd=self.paths['tools'] / 'pythonlib' / 'utils', stdout=subprocess.DEVNULL)
+
+    def adjusted_y9(self, overlay_name):
+
+        compressed_size = os.path.getsize(self.paths['final_files'] / 'overlay' / overlay_name)
+
+        with FileIO(self.paths['final_files'] / 'y9.bin', 'rb') as f:
+            # read() returns immutable bytes, so copy into a bytearray before patching
+            data = bytearray(f.read())
+            # the compressed size occupies the 3 low bytes at offset 0x1C
+            data[0x1C:0x1F] = compressed_size.to_bytes(3, 'little')
+            return data
+
+
+ def clean_builds(self, path: Path) -> None:
+ target_files = sorted(list(path.glob("*.nds")), key=lambda x: x.name)[:-4]
+ if len(target_files) != 0:
+ print("Cleaning builds folder...")
+ for file in target_files:
+ print(f"Deleting {str(file.name)}...")
+ file.unlink()
+
+ def update_save_file(self, desmume_path:Path, saved_file_name:str):
+
+ if saved_file_name != '':
+ destination = desmume_path / 'Battery' / saved_file_name
+ shutil.copy(self.paths['saved_files'] / saved_file_name, destination)
+
+ if saved_file_name.endswith('.sav'):
+ self.convert_sav_to_dsv(desmume_path, saved_file_name)
+
+ else:
+ new_saved_name = f"{self.new_iso.split('.')[0]}.dsv"
+ os.rename(destination, destination.parent / new_saved_name)
+
+    def convert_sav_to_dsv(self, desmume_path:Path, saved_file_name:str):
+        # DeSmuME .dsv footer: ASCII banner followed by savedata metadata
+ footer = [124, 60, 45, 45, 83, 110, 105, 112, 32, 97, 98, 111, 118, 101, 32, 104,
+ 101, 114, 101, 32, 116, 111, 32, 99, 114, 101, 97, 116, 101, 32, 97, 32,
+ 114, 97, 119, 32, 115, 97, 118, 32, 98, 121, 32, 101, 120, 99, 108, 117,
+ 100, 105, 110, 103, 32, 116, 104, 105, 115, 32, 68, 101, 83, 109, 117, 77,
+ 69, 32, 115, 97, 118, 101, 100, 97, 116, 97, 32, 102, 111, 111, 116, 101,
+ 114, 58, 0, 0, 1 ,0 , 0, 0, 1, 0, 3, 0, 0, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0,
+ 0, 0, 0, 124, 45, 68, 69, 83, 77, 85, 77, 69, 32, 83, 65, 86, 69, 45, 124]
+
+ sav_file = desmume_path / 'Battery' / saved_file_name
+ destination = desmume_path / 'Battery' / f"{self.new_iso.split('.')[0]}.dsv"
+ print(destination)
+ binary = bytearray(footer)
+ with open(sav_file, 'rb') as inFile:
+ with open(destination, 'wb') as outFile:
+ contents = inFile.read()
+ outFile.write(contents)
+ outFile.write(binary)
+ os.remove(sav_file)
+
+    def get_style_pointers(self, file: FileIO, ptr_range: tuple[int, int], base_offset: int, style: str) -> tuple[list[int], list[int]]:
+
+ file.seek(ptr_range[0])
+ pointers_offset: list[int] = []
+ pointers_value: list[int] = []
+ split: list[str] = [ele for ele in re.split(r'([PT])|(\d+)', style) if ele]
+
+ while file.tell() < ptr_range[1]:
+ for step in split:
+ if step == "P":
+ off = file.read_uint32()
+ if base_offset != 0 and off == 0: continue
+
+ if file.tell() - 4 < ptr_range[1]:
+ pointers_offset.append(file.tell() - 4)
+ pointers_value.append(off - base_offset)
+ elif step == "T":
+ off = file.tell()
+ pointers_offset.append(off)
+ pointers_value.append(off)
+ else:
+ file.read(int(step))
+
+ return pointers_offset, pointers_value
+
+    def create_Node_XML(self, root, list_informations, section, entry_type:str, max_len=0) -> None:
+ strings_node = etree.SubElement(root, 'Strings')
+ etree.SubElement(strings_node, 'Section').text = section
+
+ for text, pointer_offset, emb in list_informations:
+ self.create_entry(strings_node, pointer_offset, text, entry_type, -1, "")
+            #self.create_entry(strings_node, pointers_offset, text, emb, max_len)
+
+    def extract_all_menu(self, keep_translations=False) -> None:
+ #xml_path = self.paths["menu_xml"]
+ xml_path = self.paths["menu_original"]
+ xml_path.mkdir(exist_ok=True)
+
+ # Read json descriptor file
+ with open(self.paths["menu_table"], encoding="utf-8") as f:
+ menu_json = json.load(f)
+
+ for entry in tqdm(menu_json, desc='Extracting Menu Files'):
+
+ if entry["friendly_name"] == "Arm9" or entry["friendly_name"].startswith("Overlay"):
+ file_path = self.paths["extracted_files"] / entry["file_path"]
+ else:
+ file_path = self.paths["original_files"] / entry["file_path"]
+
+ with FileIO(file_path, "rb") as f:
+ xml_data = self.extract_menu_file(entry, f, keep_translations)
+
+ with open(xml_path / (entry["friendly_name"] + ".xml"), "wb") as xmlFile:
+ xmlFile.write(xml_data)
+
+        self.id = 1
+
+    def extract_menu_file(self, file_def, f: FileIO, keep_translations=False) -> bytes:
+
+ base_offset = file_def["base_offset"]
+ xml_root = etree.Element("MenuText")
+
+ for section in file_def['sections']:
+ max_len = 0
+ pointers_offset = []
+ pointers_value = []
+ if "pointers_start" in section.keys():
+ pointers_start = int(section["pointers_start"])
+ pointers_end = int(section["pointers_end"])
+
+ # Extract Pointers list out of the file
+ pointers_offset, pointers_value = self.get_style_pointers(f, (pointers_start, pointers_end), base_offset,
+ section['style'])
+ if 'pointers_alone' in section.keys():
+ for ele in section['pointers_alone']:
+ f.seek(ele, 0)
+ pointers_offset.append(f.tell())
+ off = f.read_uint32() - base_offset
+ pointers_value.append(off)
+
+ #print([hex(pointer_off) for pointer_off in pointers_offset])
+            # Group the pointers by the text they reference; emb pointers are
+            # merged with the regular kind when they point to the same string
+ temp = dict()
+ for off, val in zip(pointers_offset, pointers_value):
+ text, buff = bytes_to_text(f, val)
+ temp.setdefault(text, dict()).setdefault("ptr", []).append(off)
+
+ # Remove duplicates
+ list_informations = [(k, str(v['ptr'])[1:-1], v.setdefault('emb', None)) for k, v in temp.items()]
+
+ # Build the XML Structure with the information
+ if 'style' in section.keys() and section['style'][0] == "T": max_len = int(section['style'][1:])
+ self.create_Node_XML(xml_root, list_informations, section['section'], "String", max_len)
+
+ if keep_translations:
+ self.copy_translations_menu(root_original=xml_root, translated_path=self.paths['menu_xml'] / f"{file_def['friendly_name']}.xml")
+
+ # Write to XML file
+ return etree.tostring(xml_root, encoding="UTF-8", pretty_print=True)
+
+ def parse_entry(self, xml_node):
+
+ jap_text = xml_node.find('JapaneseText').text
+ eng_text = xml_node.find('EnglishText').text
+ status = xml_node.find('Status').text
+ notes = xml_node.find('Notes').text
+
+ final_text = eng_text or jap_text or ''
+ return jap_text, eng_text, final_text, status, notes
+
+ def copy_translations_menu(self, root_original, translated_path: Path):
+
+ if translated_path.exists():
+
+ original_entries = {entry_node.find('JapaneseText').text: (section.find('Section').text,) +
+ self.parse_entry(entry_node) for section in
+ root_original.findall('Strings') for entry_node in section.findall('Entry')}
+
+ tree = etree.parse(translated_path)
+ root_translated = tree.getroot()
+ translated_entries = {entry_node.find('JapaneseText').text: (section.find('Section').text,) +
+ self.parse_entry(entry_node) for section in
+ root_translated.findall('Strings') for entry_node in section.findall('Entry')}
+
+
+ for entry_node in root_original.iter('Entry'):
+
+ jap_text = entry_node.find('JapaneseText').text
+
+ if jap_text in translated_entries:
+
+ translated = translated_entries[jap_text]
+
+ if translated_entries[jap_text][2] is not None:
+ entry_node.find('EnglishText').text = translated_entries[jap_text][2]
+ entry_node.find('Status').text = translated_entries[jap_text][4]
+ entry_node.find('Notes').text = translated_entries[jap_text][5]
+
+                else:
+                    # string was not found in the translated XML; keep the original entry
+                    pass
+
+ #[print(f'{entry} was not found in original') for entry, value in translated_entries.items() if entry not in original_entries and entry is not None]
+
+ def unpack_menu_files(self):
+ base_path = self.paths['extracted_files'] / 'data/menu'/ 'monsterbook'
+ fps4 = Fps4(detail_path=self.paths['original_files'] / 'data/menu' / 'monsterbook' / 'EnemyIcon.dat',
+ header_path=self.paths['original_files'] / 'data/menu' / 'monsterbook' / 'EnemyIcon.b')
+ fps4.extract_files(base_path, decompressed=False)
+
+ for file in fps4.files:
+ file_path = self.paths['extracted_files'] / 'data/menu/monsterbook/' / file.name
+ enemy_fps4 = Fps4(header_path=file_path)
+ print(file_path.with_suffix(''))
+ enemy_fps4.extract_files(file_path.with_suffix(''), decompressed=True)
+
+
+ def pack_all_menu(self) -> None:
+ xml_path = self.paths["menu_xml"]
+
+ # Read json descriptor file
+ with open(self.paths["menu_table"], encoding="utf-8") as f:
+ menu_json = json.load(f)
+
+ for entry in tqdm(menu_json, total=len(menu_json), desc='Inserting Menu Files'):
+
+
+ if entry["friendly_name"] in ['Arm9', 'Consumables', 'Sorma Skill', 'Outline', 'Overlay 0', 'Overlay 3', 'Soma Data', 'Strategy', 'Battle Memo']:
+ # Copy original files
+
+ orig = self.paths["extracted_files"] / entry["file_path"]
+ if not orig.exists():
+ orig = self.paths["original_files"] / entry["file_path"]
+
+ dest = self.paths["temp_files"] / entry["file_path"]
+ dest.parent.mkdir(parents=True, exist_ok=True)
+ shutil.copyfile(orig, dest)
+
+ base_offset = entry["base_offset"]
+ pools: list[list[int]] = [[x[0], x[1] - x[0]] for x in entry["safe_areas"]]
+ pools.sort(key=lambda x: x[1])
+
+ with open(xml_path / (entry["friendly_name"] + ".xml"), "r", encoding='utf-8') as xmlFile:
+ root = etree.fromstring(xmlFile.read(), parser=etree.XMLParser(recover=True))
+
+ with open(dest, "rb") as f:
+ file_b = f.read()
+
+ with FileIO(file_b, "wb") as f:
+ self.pack_menu_file(root, pools, base_offset, f,entry['pad'])
+
+ f.seek(0)
+ dest.parent.mkdir(parents=True, exist_ok=True)
+ with open(dest, "wb") as g:
+ g.write(f.read())
+
+ #Copy in the patched folder
+ if entry['friendly_name'] != "Arm9":
+ (self.paths['final_files'] / entry['file_path']).parent.mkdir(parents=True, exist_ok=True)
+ shutil.copyfile(src=dest,
+ dst=self.paths['final_files'] / entry['file_path'])
+ else:
+ shutil.copyfile(src=dest,
+ dst=self.paths['final_files'] / 'arm9.bin')
+
+ def pack_menu_file(self, root, pools: list[list[int]], base_offset: int, f: FileIO, pad=False) -> None:
+
+ if root.find("Strings").find("Section").text == "Arm9":
+ min_seq = 400
+ entries = [ele for ele in root.iter("Entry") if
+ ele.find('PointerOffset').text not in ['732676', '732692', '732708']
+ and int(ele.find('Id').text) <= min_seq]
+ else:
+ entries = root.iter("Entry")
+
+ for line in entries:
+ hi = []
+ lo = []
+ flat_ptrs = []
+
+ p = line.find("EmbedOffset")
+ if p is not None:
+ hi = [int(x) - base_offset for x in p.find("hi").text.split(",")]
+ lo = [int(x) - base_offset for x in p.find("lo").text.split(",")]
+
+ poff = line.find("PointerOffset")
+ if poff.text is not None:
+ flat_ptrs = [int(x) for x in poff.text.split(",")]
+
+ mlen = line.find("MaxLength")
+ if mlen is not None:
+ max_len = int(mlen.text)
+ f.seek(flat_ptrs[0])
+ text_bytes = self.get_node_bytes(line,pad) + b"\x00"
+ if len(text_bytes) > max_len:
+ tqdm.write(
+ f"Line id {line.find('Id').text} ({line.find('JapaneseText').text}) too long, truncating...")
+ f.write(text_bytes[:max_len - 1] + b"\x00")
+ else:
+ f.write(text_bytes + (b"\x00" * (max_len - len(text_bytes))))
+ continue
+
+ text_bytes = self.get_node_bytes(line,pad) + b"\x00"
+
+            length = len(text_bytes)
+            for pool in pools:
+
+                if length <= pool[1]:
+                    str_pos = pool[0]
+                    #print(f'offset in pool: {hex(pool[0])}')
+                    pool[0] += length
+                    pool[1] -= length
+
+ break
+ else:
+                raise ValueError(f'Ran out of space in file: {root.find("Strings").find("Section").text}')
+
+ f.seek(str_pos)
+ f.write(text_bytes)
+ virt_pos = str_pos + base_offset
+ for off in flat_ptrs:
+ f.write_uint32_at(off, virt_pos)
+
+ for _h, _l in zip(hi, lo):
+ val_hi = (virt_pos >> 0x10) & 0xFFFF
+ val_lo = (virt_pos) & 0xFFFF
+
+ # can't encode the lui+addiu directly
+ if val_lo >= 0x8000: val_hi += 1
+
+ f.write_uint16_at(_h, val_hi)
+ f.write_uint16_at(_l, val_lo)
+
+
+ def get_node_bytes(self, entry_node, pad=False) -> bytes:
+
+ # Grab the fields from the Entry in the XML
+ #print(entry_node.find("JapaneseText").text)
+ status = entry_node.find("Status").text
+ japanese_text = entry_node.find("JapaneseText").text
+ english_text = entry_node.find("EnglishText").text
+
+        # Use the English text only when the entry's status allows insertion;
+        # otherwise fall back to the Japanese text
+ final_text = ''
+ if (status in self.list_status_insertion):
+ final_text = english_text or ''
+ else:
+ final_text = japanese_text or ''
+
+ voiceid_node = entry_node.find("VoiceId")
+
+ if voiceid_node is not None:
+ final_text = f'<{voiceid_node.text}>' + final_text
+
+ # Convert the text values to bytes using TBL, TAGS, COLORS, ...
+ bytes_entry = text_to_bytes(final_text)
+
+ #Pad with 00
+ if pad:
+ rest = 4 - len(bytes_entry) % 4 - 1
+ bytes_entry += (b'\x00' * rest)
+
+ return bytes_entry
+
+ def extract_all_skits(self, keep_translations=False):
+ type = 'skit'
+ base_path = self.paths['extracted_files'] / self.file_dict[type]
+ base_path.mkdir(parents=True, exist_ok=True)
+ fps4 = Fps4(detail_path=self.paths['original_files'] / 'data' / 'fc' / 'fcscr.dat',
+ header_path=self.paths['original_files'] / 'data' / 'fc' / 'fcscr.b')
+ fps4.extract_files(destination_path=base_path, copy_path=self.paths['temp_files'] / self.file_dict['skit'], decompressed=True)
+
+ self.paths['skit_xml'].mkdir(parents=True, exist_ok=True)
+ self.paths['skit_original'].mkdir(parents=True, exist_ok=True)
+ for tss_file in tqdm(base_path.iterdir(), desc='Extracting Skits Files...'):
+ tss_obj = Tss(tss_file, list_status_insertion=self.list_status_insertion)
+ if len(tss_obj.struct_dict) > 0:
+ tss_obj.extract_to_xml(original_path=self.paths['skit_original'] / tss_file.with_suffix('.xml').name,
+ translated_path=self.paths['skit_xml'] / tss_file.with_suffix('.xml').name,
+ keep_translations=keep_translations)
+
+ def pack_tss(self, destination_path:Path, xml_path:Path):
+ tss = Tss(path=destination_path,
+ list_status_insertion=self.list_status_insertion)
+
+ tss.pack_tss_file(destination_path=destination_path,
+ xml_path=xml_path)
+
+ def pack_all_skits(self):
+ type = 'skit'
+
+ fps4 = Fps4(detail_path=self.paths['original_files'] / 'data' / 'fc' / 'fcscr.dat',
+ header_path=self.paths['original_files'] / 'data' / 'fc' / 'fcscr.b')
+
+ xml_list, archive_list = self.find_changes('skit')
+
+ #Repack TSS files
+ for archive in tqdm(archive_list, total=len(archive_list), desc="Inserting Skits Files..."):
+ end_name = f"{self.file_dict[type]}/{archive}.FCBIN"
+ src = self.paths['extracted_files'] / end_name
+ tss_path = self.paths['temp_files'] / end_name
+ shutil.copy(src=src,
+ dst=tss_path)
+ self.pack_tss(destination_path=tss_path,
+ xml_path=self.paths['skit_xml'] / f'{archive}.xml')
+
+ args = ['lzss', '-evn', tss_path]
+ subprocess.run(args, stdout=subprocess.DEVNULL)
+
+ #Repack FPS4 archive
+ final_path = self.paths['final_files'] / 'data' / 'fc'
+ final_path.mkdir(parents=True, exist_ok=True)
+ fps4.pack_file(updated_file_path=self.paths['temp_files'] / self.file_dict[type],
+ destination_folder=final_path)
+
+
+ def pack_mapbin_story(self, file_name, type):
+ mapbin_folder = self.paths['temp_files'] / self.file_dict[type] / file_name
+
+ fps4_mapbin = Fps4(detail_path=self.paths['extracted_files'] / self.file_dict[type] / f'{file_name}.MAPBIN',
+ header_path=self.paths['extracted_files'] / self.file_dict[type] / f'{file_name}.B')
+
+ fps4_mapbin.pack_fps4_type1(updated_file_path=mapbin_folder,
+ destination_folder=self.paths['temp_files'] / self.file_dict[type])
+ def pack_all_story(self):
+ type = 'story'
+ # Copy original TSS files in the "updated" folder
+ dest = self.paths['temp_files'] / self.file_dict[type]
+
+ #Repack all the TSS that need to be updated based on status changed
+ xml_list, archive_list = self.find_changes('story')
+
+ if len(xml_list) > 0:
+ for xml_path in tqdm(xml_list, total=len(xml_list), desc='Inserting Story Files'):
+
+ if os.path.exists(xml_path):
+ archive_name = xml_path.stem if not xml_path.stem.endswith('P') else xml_path.stem[0:-1]
+ end_name = f"{self.file_dict['story']}/{archive_name}/{xml_path.stem}.SCP"
+ src = self.paths['extracted_files'] / end_name
+ tss_path = self.paths['temp_files'] / end_name
+ shutil.copy(src=src,
+ dst=tss_path)
+ self.pack_tss(destination_path=tss_path,
+ xml_path=xml_path)
+
+ args = ['lzss', '-evn', tss_path]
+ subprocess.run(args, stdout = subprocess.DEVNULL)
+
+        # Repack the MAPBIN archives whose XMLs have changed
+        for archive in archive_list:
+ self.pack_mapbin_story(archive, type)
+
+ folder = 'm'
+ base_path = self.paths['extracted_files'] / 'data' / folder
+ (self.paths['final_files'] / self.file_dict[type]).mkdir(parents=True, exist_ok=True)
+ fps4_m = Fps4(detail_path=self.paths['original_files'] / self.file_dict['story'] / f'{folder}.dat',
+ header_path=self.paths['original_files'] / self.file_dict['story'] / f'{folder}.b')
+ fps4_m.pack_fps4_type1(updated_file_path=self.paths['temp_files'] / self.file_dict[type],
+ destination_folder=self.paths['final_files'] / self.file_dict[type])
+
+ def find_changes(self, type):
+
+ xml_list = []
+ archive_list = []
+ for xml_path in [path for path in self.paths[f'{type}_xml'].iterdir() if 'git' not in path.name]:
+ tree = etree.parse(xml_path)
+ root = tree.getroot()
+ entries_translated = [entry for entry in root.iter('Entry') if entry.find('Status').text in self.list_status_insertion]
+
+
+ if len(entries_translated) > 0:
+ archive_name = xml_path.stem if not xml_path.stem.endswith('P') else xml_path.stem[0:-1]
+ xml_list.append(xml_path)
+ archive_list.append(archive_name)
+
+ archive_list = list(set(archive_list))
+
+ return xml_list, archive_list
+
+ def extract_tss(self, tss_file:Path, file_type:str, keep_translations=False):
+ tss_obj = Tss(path=tss_file, list_status_insertion=self.list_status_insertion)
+
+ if (len(tss_obj.struct_dict) > 0) or (len(tss_obj.string_list) > 0):
+ original_path = self.paths[f'{file_type}_original'] / tss_file.with_suffix('.xml').name
+ translated_path = self.paths[f'{file_type}_xml'] / tss_file.with_suffix('.xml').name
+ tss_obj.extract_to_xml(original_path= original_path,
+ translated_path=translated_path,
+ keep_translations=keep_translations)
+
+
+ def extract_all_story(self, extract_XML=False):
+ folder = 'm'
+ base_path = self.paths['extracted_files'] / 'data' / folder
+
+ fps4 = Fps4(detail_path=self.paths['original_files'] / 'data' / folder / f'{folder}.dat',
+ header_path=self.paths['original_files'] / 'data' / folder / f'{folder}.b')
+ copy_path = self.paths['temp_files'] / self.file_dict['story']
+ fps4.extract_files(destination_path=base_path, copy_path=copy_path)
+
+ self.paths['story_xml'].mkdir(parents=True, exist_ok=True)
+ self.paths['story_original'].mkdir(parents=True, exist_ok=True)
+ scp_files = [file for file in base_path.iterdir() if file.suffix == '.MAPBIN']
+ for file in tqdm(scp_files, total=len(scp_files), desc=f"Extracting Story Files"):
+
+
+ file_header = file.with_suffix('.B')
+ fps4_tss = Fps4(detail_path=file, header_path=file_header)
+ folder_path = file.with_suffix('')
+ folder_path.mkdir(parents=True, exist_ok=True)
+ fps4_tss.extract_files(destination_path=folder_path, copy_path=copy_path / file.stem, decompressed=True)
+
+ #Load the tss file
+ for tss_file in [file_path for file_path in folder_path.iterdir() if file_path.suffix == '.SCP']:
+ self.extract_tss(tss_file, 'story')
+
+
+
+
+ def create_entry(self, strings_node, pointer_offset, text, entry_type, speaker_id, unknown_pointer):
+
+ # Add it to the XML node
+ entry_node = etree.SubElement(strings_node, "Entry")
+ etree.SubElement(entry_node, "PointerOffset").text = str(pointer_offset).replace(' ', '')
+ text_split = re.split(self.COMMON_TAG, text)
+
+ if len(text_split) > 1 and any(possible_value in text for possible_value in self.VALID_VOICEID):
+ etree.SubElement(entry_node, "VoiceId").text = text_split[1]
+ etree.SubElement(entry_node, "JapaneseText").text = ''.join(text_split[2:])
+ else:
+ etree.SubElement(entry_node, "JapaneseText").text = text
+
+ etree.SubElement(entry_node, "EnglishText")
+ etree.SubElement(entry_node, "Notes")
+
+ if entry_type == "Struct":
+ etree.SubElement(entry_node, "StructId").text = str(self.struct_id)
+ etree.SubElement(entry_node, "SpeakerId").text = str(speaker_id)
+
+ etree.SubElement(entry_node, "Id").text = str(self.id)
+ etree.SubElement(entry_node, "Status").text = "To Do"
+        self.id += 1
+
+    def extract_from_string(self, f, strings_offset, pointer_offset, text_offset, root):
+
+ f.seek(text_offset, 0)
+ japText, buff = bytes_to_text(f, text_offset)
+ self.create_entry(root, pointer_offset, japText, "Other Strings", -1, "")
+
+
+
+ def text_to_bytes(self, text):
+ multi_regex = (self.HEX_TAG + "|" + self.COMMON_TAG + r"|(\n)")
+ tokens = [sh for sh in re.split(multi_regex, text) if sh]
+
+ output = b''
+ for t in tokens:
+ # Hex literals
+ if re.match(self.HEX_TAG, t):
+ output += struct.pack("B", int(t[1:3], 16))
+
+ # Tags
+ elif re.match(self.COMMON_TAG, t):
+ tag, param, *_ = t[1:-1].split(":") + [None]
+
+ if tag == "icon":
+ output += struct.pack("B", self.ijsonTblTags["TAGS"].get(tag))
+ output += b'\x28' + struct.pack('B', int(param)) + b'\x29'
+
+ elif any(re.match(possible_value, tag) for possible_value in self.VALID_VOICEID):
+ output += b'\x09\x28' + tag.encode("cp932") + b'\x29'
+
+ elif tag == "Bubble":
+ output += b'\x0C'
+
+ else:
+ if tag in self.ijsonTblTags["TAGS"]:
+ output += struct.pack("B", self.ijsonTblTags["TAGS"][tag])
+ continue
+
+ for k, v in self.ijsonTblTags.items():
+ if tag in v:
+ if k in ['NAME', 'COLOR']:
+ output += struct.pack('B',self.iTags[k]) + b'\x28' + bytes.fromhex(v[tag]) + b'\x29'
+ break
+ else:
+ output += b'\x81' + bytes.fromhex(v[tag])
+
+ # Actual text
+ elif t == "\n":
+ output += b"\x0A"
+ else:
+ for c in t:
+ if c in self.PRINTABLE_CHARS or c == "\u3000":
+ output += c.encode("cp932")
+ else:
+
+ if c in self.ijsonTblTags["TBL"].keys():
+ b = self.ijsonTblTags["TBL"][c].to_bytes(2, 'big')
+ output += b
+ else:
+ output += c.encode("cp932")
+
+        return output
diff --git a/tools/pythonlib/__init__.py b/tools/pythonlib/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tools/pythonlib/formats/FileIO.py b/tools/pythonlib/formats/FileIO.py
new file mode 100644
index 0000000..3eaff52
--- /dev/null
+++ b/tools/pythonlib/formats/FileIO.py
@@ -0,0 +1,279 @@
+import io
+import struct
+from io import BytesIO
+from pathlib import Path
+from typing import Union
+
+
+class FileIO(object):
+ def __init__(self, path: Union[Path, str, BytesIO, bytes], mode="r+b", endian="little"):
+ self.mode: str = mode
+        self._isBytesIO = False
+        if type(path) is bytes:
+            self.path = None
+            self.f = path  # type: ignore
+            self.is_memory_file = True
+        elif type(path) is BytesIO:
+            self.path = None
+            self.f = path
+            self._isBytesIO = True
+            self.is_memory_file = True
+        else:
+            self.path = path
+            self.is_memory_file = False
+        self.endian = "<" if endian == "little" or endian == "<" else ">"
+
+    def __enter__(self):
+        if self.is_memory_file:
+            self.f: io.BufferedIOBase = self.f if self._isBytesIO else BytesIO(self.f)  # type: ignore
+ else:
+ self.f:io.BufferedIOBase = open(self.path, self.mode) # type: ignore
+ self.f.seek(0)
+ return self
+
+ def __exit__(self, exc_type, exc_value, traceback):
+ self.f.close()
+
+ def close(self):
+ self.f.close()
+
+ def tell(self):
+ return self.f.tell()
+
+ def truncate(self, size=None):
+ if size is None:
+ self.f.truncate(self.f.tell())
+ else:
+ self.f.truncate(size)
+
+ def seek(self, pos, whence=0):
+ self.f.seek(pos, whence)
+
+ def read(self, n=-1):
+ return self.f.read(n)
+
+ def read_at(self, pos, n=-1):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read(n)
+ self.seek(current)
+ return ret
+
+ def write(self, data):
+ self.f.write(data)
+
+ def write_at(self, pos, data):
+ current = self.tell()
+ self.seek(pos)
+ self.write(data)
+ self.seek(current)
+
+ def peek(self, n):
+ pos = self.tell()
+ ret = self.read(n)
+ self.seek(pos)
+ return ret
+
+ def write_line(self, data):
+ self.f.write(data + "\n")
+
+ def set_endian(self, endian):
+ self.endian = "<" if endian == "little" or endian == "<" else ">"
+
+ def read_int8(self):
+ return struct.unpack("b", self.read(1))[0]
+
+ def read_int8_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_int8()
+ self.seek(current)
+ return ret
+
+ def read_uint8(self):
+ return struct.unpack("B", self.read(1))[0]
+
+ def read_uint8_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_uint8()
+ self.seek(current)
+ return ret
+
+ def read_int16(self):
+ return struct.unpack(self.endian + "h", self.read(2))[0]
+
+ def read_int16_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_int16()
+ self.seek(current)
+ return ret
+
+ def read_uint16(self):
+ return struct.unpack(self.endian + "H", self.read(2))[0]
+
+ def read_uint16_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_uint16()
+ self.seek(current)
+ return ret
+
+ def read_int32(self):
+ return struct.unpack(self.endian + "i", self.read(4))[0]
+
+ def read_int32_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_int32()
+ self.seek(current)
+ return ret
+
+ def read_uint32(self):
+ return struct.unpack(self.endian + "I", self.read(4))[0]
+
+ def read_uint32_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_uint32()
+ self.seek(current)
+ return ret
+
+ def read_int64(self):
+ return struct.unpack(self.endian + "q", self.read(8))[0]
+
+ def read_int64_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_int64()
+ self.seek(current)
+ return ret
+
+ def read_uint64(self):
+ return struct.unpack(self.endian + "Q", self.read(8))[0]
+
+ def read_uint64_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_uint64()
+ self.seek(current)
+ return ret
+
+ def read_single(self):
+ return struct.unpack(self.endian + "f", self.read(4))[0]
+
+ def read_single_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_single()
+ self.seek(current)
+ return ret
+
+ def read_double(self):
+ return struct.unpack(self.endian + "d", self.read(8))[0]
+
+ def read_double_at(self, pos):
+ current = self.tell()
+ self.seek(pos)
+ ret = self.read_double()
+ self.seek(current)
+ return ret
+
+ def skip_padding(self, alignment):
+ while self.tell() % alignment != 0:
+ self.read_uint8()
+
+ def write_int8(self, num):
+ self.f.write(struct.pack("b", num))
+
+ def write_int8_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_int8(num)
+ self.seek(current)
+
+ def write_uint8(self, num):
+ self.f.write(struct.pack("B", num))
+
+ def write_uint8_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_uint8(num)
+ self.seek(current)
+
+ def write_int16(self, num):
+ self.f.write(struct.pack(self.endian + "h", num))
+
+ def write_int16_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_int16(num)
+ self.seek(current)
+
+ def write_uint16(self, num):
+ self.f.write(struct.pack(self.endian + "H", num))
+
+ def write_uint16_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_uint16(num)
+ self.seek(current)
+
+ def write_int32(self, num):
+ self.f.write(struct.pack(self.endian + "i", num))
+
+ def write_int32_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_int32(num)
+ self.seek(current)
+
+ def write_uint32(self, num):
+ self.f.write(struct.pack(self.endian + "I", num))
+
+ def write_uint32_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_uint32(num)
+ self.seek(current)
+
+ def write_int64(self, num):
+ self.f.write(struct.pack(self.endian + "q", num))
+
+ def write_int64_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_int64(num)
+ self.seek(current)
+
+ def write_uint64(self, num):
+ self.f.write(struct.pack(self.endian + "Q", num))
+
+ def write_uint64_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_uint64(num)
+ self.seek(current)
+
+ def write_single(self, num):
+ self.f.write(struct.pack(self.endian + "f", num))
+
+ def write_single_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_single(num)
+ self.seek(current)
+
+ def write_double(self, num):
+ self.f.write(struct.pack(self.endian + "d", num))
+
+ def write_double_at(self, pos, num):
+ current = self.tell()
+ self.seek(pos)
+ self.write_double(num)
+ self.seek(current)
+
+ def write_padding(self, alignment, pad_byte=0x00):
+ while self.tell() % alignment != 0:
+ self.write_uint8(pad_byte)
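Every `write_*_at` helper above follows the same save/seek/write/restore pattern. A generic one-function sketch of that pattern over `io.BytesIO` (the `write_at` name is illustrative, not part of the FileIO class):

```python
import io
import struct

def write_at(f, pos, fmt, value):
    """Patch `value` at `pos`, then restore the previous stream position."""
    current = f.tell()
    f.seek(pos)
    f.write(struct.pack(fmt, value))
    f.seek(current)

buf = io.BytesIO(b'\x00' * 8)
buf.seek(0, 2)                    # position at the end, as if appending
write_at(buf, 4, '<I', 0xDEADBEEF)
# buf.tell() is still 8; bytes 4..7 now hold the packed value
```

This is why the `*_at` variants are safe to call mid-stream: the caller's append position is untouched.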
diff --git a/tools/pythonlib/formats/__init__.py b/tools/pythonlib/formats/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tools/pythonlib/formats/fps4.py b/tools/pythonlib/formats/fps4.py
new file mode 100644
index 0000000..808c8ee
--- /dev/null
+++ b/tools/pythonlib/formats/fps4.py
@@ -0,0 +1,216 @@
+import shutil
+from dataclasses import dataclass
+import struct
+from typing import Optional
+from .FileIO import FileIO
+import os
+from pathlib import Path
+import subprocess
+
+
+@dataclass
+class fps4_file():
+ c_type:str
+ data:bytes
+ name:str
+ size:int
+ rank:int
+ offset:int
+
+dict_buffer = {
+ 0x2C:16,
+ 0x28:32
+}
+
+class Fps4():
+
+ def __init__(self, header_path:Path , detail_path:Path = None, size_adjusted = 0) -> None:
+ self.type = -1
+ self.align = False
+ self.files = []
+ self.header_path = header_path
+ self.file_size = os.path.getsize(header_path)
+ self.detail_path = detail_path or header_path
+ self.size_adjusted = size_adjusted
+
+ self.extract_information()
+
+ def extract_information(self):
+ with FileIO(self.header_path) as f_header:
+ self.header_data = f_header.read()
+ f_header.seek(4,0)
+ self.file_amount = f_header.read_uint32()-1
+ self.header_size = f_header.read_uint32()
+ self.offset = f_header.read_uint32()
+ self.block_size = f_header.read_uint16()
+
+ self.files = []
+
+
+ if self.offset == 0x0:
+ self.pack_file = self.pack_fps4_type1
+
+ if self.block_size == 0x2C:
+ self.read_more = True
+ self.extract_type1_fps4(f_header=f_header)
+
+ elif self.block_size == 0x28:
+ self.read_more = False
+ self.extract_type1_fps4(f_header=f_header)
+
+ else:
+ self.pack_file = self.pack_fps4_type1
+ self.extract_type2_fps4(f_header=f_header)
+ #Type 2 = Header with File offset
+ def extract_type2_fps4(self, f_header:FileIO):
+
+ #Read all the files offsets
+ files_offset = []
+ f_header.seek(self.header_size,0)
+ for _ in range(self.file_amount):
+ files_offset.append(f_header.read_uint32())
+ files_offset.append(self.file_size)
+
+ #Create each file
+ for i in range(len(files_offset)-1):
+ f_header.seek(files_offset[i], 0)
+ size = files_offset[i+1] - files_offset[i]
+ data = f_header.read(size)
+ c_type = 'None'
+
+ if data[0] == 0x10:
+ c_type = 'LZ10'
+ elif data[0] == 0x11:
+ c_type = 'LZ11'
+
+ self.files.append(fps4_file(c_type, data, f'{i}.bin', size, i, files_offset[i]))
+
+
+ #Type 1 = Header + Detail
+ def extract_type1_fps4(self, f_header:FileIO):
+ self.type = 1
+ files_infos = []
+ f_header.seek(self.header_size, 0)
+
+ with FileIO(self.detail_path) as det:
+ for _ in range(self.file_amount):
+ offset = f_header.read_uint32()
+ size = f_header.read_uint32()
+
+ if self.read_more:
+ f_header.read_uint32()
+ name = f_header.read(32).decode("ASCII").strip('\x00')
+ files_infos.append((offset, size, name))
+
+ for i, (offset, size, name) in enumerate(files_infos):
+ #print(f'name: {name} - size: {size}')
+
+ det.seek(offset)
+ data = det.read(size)
+
+ c_type = 'None'
+ if data[0] == 0x10:
+ c_type = 'LZ10'
+ elif data[0] == 0x11:
+ c_type = 'LZ11'
+
+ self.files.append(fps4_file(c_type, data, name, size, i, offset))
+
+ def extract_files(self, destination_path:Path, copy_path:Path, decompressed=False):
+
+ destination_path.mkdir(parents=True, exist_ok=True)
+ for file in self.files:
+
+ copy_path.mkdir(parents=True, exist_ok=True)
+ with open(destination_path / file.name, "wb") as f:
+ f.write(file.data)
+
+ shutil.copy(destination_path / file.name, copy_path / file.name)
+
+ #Decompress using LZ10 or LZ11
+ if decompressed:
+ args = ['lzss', '-d', destination_path / file.name]
+ subprocess.run(args, cwd=Path.cwd() / 'tools/pythonlib/utils', stdout = subprocess.DEVNULL)
+
+ with open(destination_path / file.name, 'rb') as f:
+ head = f.read(4)
+
+ if head == b'FPS4':
+ file.file_extension = 'FPS4'
+ elif head[:-1] == b'TSS':
+ file.file_extension = 'TSS'
+
+ def compress_file(self, updated_file_path:Path, file_name:str, c_type:str):
+ args = []
+ if c_type == 'LZ10':
+ args = ['lzss', '-evn', updated_file_path / file_name]
+ elif c_type == "LZ11":
+ args = ['lzx', '-evb', updated_file_path / file_name]
+ subprocess.run(args, cwd=Path.cwd() / 'tools/pythonlib/utils', stdout = subprocess.DEVNULL)
+
+ def pack_fps4_type1(self, updated_file_path:Path, destination_folder:Path):
+ buffer = 0
+
+ #Update detail file
+ self.files.sort(key= lambda file: file.offset)
+ with FileIO(destination_folder / self.detail_path.name, "wb") as fps4_detail:
+
+ #Writing new dat file and updating file attributes
+ for file in self.files:
+ #self.compress_file(updated_file_path, file_name=file.name, c_type=file.c)
+
+ with FileIO(updated_file_path / file.name, 'rb') as sub_file:
+ file.data = sub_file.read()
+ file.offset = buffer
+ file.size = len(file.data)
+ buffer += file.size
+ fps4_detail.write(file.data)
+
+ #Update header file
+ with FileIO(self.header_data, "r+b") as fps4_header:
+
+ fps4_header.seek(self.header_size,0)
+ self.files.sort(key= lambda file: file.rank)
+ for file in self.files:
+ fps4_header.write(struct.pack('<I', file.offset))
+ fps4_header.write(struct.pack('<I', file.size))
+
+ @staticmethod
+ def find_header(path:Path):
+ header_found = [ele for ele in os.listdir(path.parent) if Path(ele).stem == path.stem and Path(ele).suffix != path.suffix]
+ if len(header_found) > 0:
+ print(f'header was found: {header_found[0]}')
+ return path.parent / header_found[0]
+ else:
+ return None
+
+
+ #def get_fps4_type(self):
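`extract_type2_fps4` above derives each file's size from the gap between consecutive offsets, appending the container size as a final sentinel. A minimal standalone sketch of that offset-table walk (function and variable names are illustrative, not the Fps4 API):

```python
import io
import struct

def read_offset_table(f, count, total_size):
    """Read `count` little-endian uint32 offsets and derive (offset, size) pairs.

    `total_size` is appended as a sentinel so the last entry's size is the
    distance from its offset to the end of the archive.
    """
    offsets = [struct.unpack('<I', f.read(4))[0] for _ in range(count)]
    offsets.append(total_size)
    return [(offsets[i], offsets[i + 1] - offsets[i]) for i in range(count)]

# Tiny fake table: two entries at 0x10 and 0x18 in a 0x20-byte container.
table = struct.pack('<II', 0x10, 0x18)
entries = read_offset_table(io.BytesIO(table), 2, 0x20)
# entries == [(16, 8), (24, 8)]
```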
diff --git a/tools/pythonlib/formats/structnode.py b/tools/pythonlib/formats/structnode.py
new file mode 100644
index 0000000..41e5b9b
--- /dev/null
+++ b/tools/pythonlib/formats/structnode.py
@@ -0,0 +1,303 @@
+from .FileIO import FileIO
+from .text_toh import text_to_bytes, bytes_to_text
+import struct
+class Bubble:
+
+ def __init__(self, id:int, jap_text:str, eng_text:str, status:str = 'To Do', bytes = b''):
+ self.eng_text = eng_text
+ self.jap_text = jap_text
+ self.id = id
+ self.status = status
+ self.bytes = bytes
+
+class StructEntry:
+
+ def __init__(self, pointer_offset:int, text_offset:int):
+ self.pointer_offset = pointer_offset
+ self.text_offset = text_offset
+ self.pointer_offset_list = []
+ self.sub_id = 1
+ self.bubble_list = []
+
+
+
+class StructNode:
+ def __init__(self, id: int, pointer_offset: int, text_offset: int, tss: FileIO, strings_offset:int, file_size:int, section:str):
+ self.id = id
+ self.pointer_offset = pointer_offset
+ self.pointer_offset_str = ''
+ self.text_offset = text_offset
+ self.strings_offset = strings_offset
+ self.file_size = file_size
+ self.section = section
+ self.nb_unknowns = 0
+ self.unknowns = []
+ self.texts_entry = []
+ self.end_unknowns = []
+ self.speaker = Speaker(0,0)
+ self.speaker.text = 'Variable'
+
+ self.extract_struct_information(tss)
+
+ def find_unknowns(self, tss:FileIO):
+
+ first_entry = tss.read_uint32()
+ second_entry = tss.read_uint32()
+ third_entry = tss.read_uint32()
+
+ if self.section == "Story":
+ self.nb_unknowns = 2
+ else:
+
+ if 0x0 <= first_entry <= 0x20 and 0x0 <= second_entry <= 0x20:
+ tss.seek(self.text_offset)
+ if 0x0 <= third_entry <= 0x20:
+
+ if self.section == "Misc":
+ self.nb_unknowns = 10
+
+ else:
+ self.nb_unknowns = 6
+
+ def extract_struct_information(self, tss: FileIO):
+
+ #Test if first entries are unknown values to be kept
+ tss.seek(self.text_offset)
+ if tss.tell() <= (self.file_size - 12):
+ self.find_unknowns(tss)
+
+ tss.seek(self.text_offset)
+ for _ in range(self.nb_unknowns):
+ self.unknowns.append(tss.read_uint32())
+
+ if self.section == "Story" or (self.section == "NPC" and self.nb_unknowns > 0):
+
+ pointer_offset = tss.tell()
+ self.extract_speaker_information(tss, pointer_offset)
+ tss.seek(pointer_offset + 4)
+
+
+ self.extract_texts_information(tss)
+
+
+ def extract_speaker_information(self, tss:FileIO, pointer_offset:int):
+ offset = tss.read_uint32() + self.strings_offset
+ self.speaker = Speaker(pointer_offset, offset)
+ tss.seek(offset)
+ self.speaker.jap_text, self.speaker.bytes = bytes_to_text(tss, offset)
+
+ def extract_bubbles(self, text, entry_bytes:bytes):
+ bubble_id = 1
+ text_split = text.split('<Bubble>')
+ bytes_split = entry_bytes.split(b'\x0C')
+ bubble_list = []
+ for text, bubble_bytes in zip(text_split, bytes_split):
+ bubble = Bubble(id=bubble_id, jap_text=text, eng_text='', bytes=bubble_bytes)
+ bubble_list.append(bubble)
+ bubble_id += 1
+
+ return bubble_list
+ def extract_texts_information(self, tss:FileIO):
+ offset = tss.tell()
+
+ normal_text = False
+ sub_id = 1
+ if offset <= self.file_size-4:
+
+
+ val = tss.read_uint32()
+ if val + self.strings_offset < self.file_size:
+ while val != 0 and val != 0x1 and val < self.file_size:
+ pointer_offset = tss.tell()-4
+ text_offset = val + self.strings_offset
+ entry = StructEntry(pointer_offset, text_offset)
+ tss.seek(text_offset)
+ jap_text, bytes = bytes_to_text(tss, text_offset)
+
+
+ entry.bubble_list = self.extract_bubbles(jap_text, bytes)
+ entry.sub_id = sub_id
+ self.texts_entry.append(entry)
+ sub_id += 1
+
+ tss.seek(pointer_offset + 4)
+
+ if tss.tell() < self.file_size - 4:
+ val = tss.read_uint32()
+
+ if val == 0x1 or val == 0x0:
+ self.end_unknowns.append(val)
+ else:
+ break
+ else:
+ normal_text = True
+ else:
+ normal_text = True
+
+ if normal_text:
+ entry = StructEntry(self.pointer_offset, self.text_offset)
+ jap_text, bytes = bytes_to_text(tss, self.text_offset)
+ entry.bubble_list = self.extract_bubbles(jap_text, bytes)
+ self.texts_entry.append(entry)
+
+ def parse_xml_nodes(self, xml_nodes_list, list_status_insertion):
+ self.id = int(xml_nodes_list[0].find("Id").text)
+
+ max_sub_id = max([int(entry.find("SubId").text) for entry in xml_nodes_list])
+
+ for sub_id in range(1, max_sub_id + 1):
+
+ sub_nodes = [sub for sub in xml_nodes_list if int(sub.find('SubId').text) == sub_id]
+ max_bubble_id = max([int(entry.find('BubbleId').text) for entry in sub_nodes])
+ for bubble_id in range(1, max_bubble_id + 1):
+ bubble = [bubble for bubble in sub_nodes if int(bubble.find('BubbleId').text) == bubble_id][0]
+ entry_bytes, japanese_text, final_text, status = self.get_node_bytes(bubble, list_status_insertion, pad=False)
+
+ self.texts_entry[sub_id-1].bubble_list[bubble_id-1].jap_text = japanese_text
+ self.texts_entry[sub_id-1].bubble_list[bubble_id-1].eng_text = final_text
+ self.texts_entry[sub_id-1].bubble_list[bubble_id-1].status = status
+ self.texts_entry[sub_id-1].bubble_list[bubble_id-1].bytes = entry_bytes
+ def intersperse(self, my_list, item):
+ result = [item] * (len(my_list) * 2 - 1)
+ result[0::2] = my_list
+ return result
+
+ def adjust_bubble(self, entry):
+ buffer = b''
+ buffer_text = ''
+ nb_entries = len(entry.bubble_list)
+
+ for entry_node in entry.bubble_list:
+ bytes_text, text = self.get_node_bytes(entry_node.eng_text)
+ buffer += bytes_text
+ buffer_text += text
+
+ if nb_entries >= 2:
+ buffer += b'\x0C'
+ buffer_text += '<Bubble>'
+ return buffer, buffer_text
+
+ def pad(self, offset:int, nb_bytes):
+ rest = nb_bytes - offset % nb_bytes
+ return (b'\x00' * rest)
+
+ def add_speaker_entry(self, speaker_dict:dict, speaker_id:int):
+
+ speaker_found = [speaker for id, speaker in speaker_dict.items() if speaker.jap_text == self.speaker.jap_text]
+
+
+ if len(speaker_found) > 0:
+ self.speaker = speaker_found[0]
+
+ if str(self.speaker.pointer_offset) not in speaker_found[0].pointer_offset_list:
+ speaker_found[0].pointer_offset_list.append(str(self.speaker.pointer_offset))
+ return speaker_id
+ else:
+ self.speaker.id = speaker_id
+ speaker_dict[speaker_id] = self.speaker
+ return speaker_id+1
+
+
+ def get_node_bytes(self, entry_node, list_status_insertion, pad=False) -> (bytes, str, str, str):
+
+ # Grab the fields from the Entry in the XML
+ #print(entry_node.find("JapaneseText").text)
+ status = entry_node.find("Status").text
+ japanese_text = entry_node.find("JapaneseText").text
+ english_text = entry_node.find("EnglishText").text
+
+ # Use the English text when the Status is in the insertion list, Japanese otherwise
+ final_text = ''
+ if (status in list_status_insertion):
+ final_text = english_text or ''
+ else:
+ final_text = japanese_text or ''
+
+ voiceid_node = entry_node.find("VoiceId")
+
+ if voiceid_node is not None:
+ final_text = f'<{voiceid_node.text}>' + final_text
+
+ # Convert the text values to bytes using TBL, TAGS, COLORS, ...
+ bytes_entry = text_to_bytes(final_text)
+
+ #Pad with 00
+ if pad:
+ rest = 4 - len(bytes_entry) % 4 - 1
+ bytes_entry += (b'\x00' * rest)
+
+ return bytes_entry, japanese_text, final_text, status
+
+ def write_entries(self, tss):
+
+ for entry in self.texts_entry:
+ bubble_bytes = [bubble.bytes for bubble in entry.bubble_list]
+ buffer = b''.join(self.intersperse(bubble_bytes, b'\x0C'))
+ entry.text_offset = tss.tell()
+ tss.write(buffer)
+ tss.write(b'\x00')
+
+ def write_unknowns(self, tss):
+ for val in self.unknowns:
+ tss.write(struct.pack('<I', val))
+
+ def pack_node(self, tss, speaker_dict):
+ if tss.tell() % 4 > 0:
+ tss.write(self.pad(tss.tell(), 4))
+ self.text_offset = tss.tell()
+ self.write_unknowns(tss)
+ self.write_speaker_pointer(tss, speaker_dict)
+ self.write_entry_pointers(tss)
+ self.write_end_unknowns(tss)
+
+ offset = tss.tell()
+ tss.seek(self.pointer_offset)
+ tss.write(struct.pack('<I', self.text_offset - self.strings_offset))
+ tss.seek(offset)
diff --git a/tools/pythonlib/formats/text_toh.py b/tools/pythonlib/formats/text_toh.py
new file mode 100644
--- /dev/null
+++ b/tools/pythonlib/formats/text_toh.py
+import json
+import re
+import string
+import struct
+from .FileIO import FileIO
+
+COMMON_TAG = r"(<[\w/]+:?\w+>)"
+HEX_TAG = r"(\{[0-9A-F]{2}\})"
+PRINTABLE_CHARS = "".join(
+ (string.digits, string.ascii_letters, string.punctuation, " ")
+ )
+jsonTblTags = dict()
+with open('../Tales-of-Hearts-DS/Project/tbl_all.json') as f:
+ jsonraw = json.loads(f.read())
+ for k, v in jsonraw.items():
+ jsonTblTags[k] = {int(k2, 16): v2 for k2, v2 in v.items()}
+
+ijsonTblTags = dict()
+for k, v in jsonTblTags.items():
+ if k in ['TAGS', 'TBL']:
+ ijsonTblTags[k] = {v2: k2 for k2, v2 in v.items()}
+ else:
+ ijsonTblTags[k] = {v2: hex(k2).replace('0x', '').upper() for k2, v2 in v.items()}
+iTags = {v2.upper(): k2 for k2, v2 in jsonTblTags['TAGS'].items()}
+def bytes_to_text(src: FileIO, offset: int = -1) -> (str, bytes):
+ finalText = ""
+ chars = jsonTblTags['TBL']
+
+ if (offset > 0):
+ src.seek(offset, 0)
+ buffer = []
+ while True:
+ b = src.read(1)
+
+ if b == b"\x00": break
+
+
+ b = ord(b)
+ buffer.append(b)
+
+ # Button
+ if b == 0x81:
+ next_b = src.read(1)
+ if ord(next_b) in jsonTblTags['BUTTON'].keys():
+ finalText += f"<{jsonTblTags['BUTTON'].get(ord(next_b))}>"
+ buffer.append(ord(next_b))
+ continue
+ else:
+ src.seek(src.tell() - 1, 0)
+
+
+
+ # Custom Encoded Text
+ if (0x80 <= b <= 0x9F) or (0xE0 <= b <= 0xEA):
+ v = src.read_uint8()
+ c = (b << 8) | v
+ buffer.append(v)
+ finalText += chars.get(c, "{%02X}{%02X}" % (c >> 8, c & 0xFF))
+ continue
+
+ if b == 0xA:
+ finalText += ("\n")
+ continue
+
+ # Voice Id
+ elif b in [0x9]:
+
+ val = ""
+ while src.read(1) != b"\x29":
+ src.seek(src.tell() - 1)
+ val += src.read(1).decode("cp932")
+
+ buffer.extend(list(val.encode("cp932")))
+ buffer.append(0x29)
+
+ val += ">"
+ val = val.replace('(', '<')
+
+ finalText += val
+ continue
+
+ # ASCII text
+ if chr(b) in PRINTABLE_CHARS:
+ finalText += chr(b)
+ continue
+
+ # cp932 text
+ if 0xA0 < b < 0xE0:
+ finalText += struct.pack("B", b).decode("cp932")
+ continue
+
+
+
+ if b in [0x3, 0x4, 0xB]:
+ b_value = b''
+
+ if ord(src.read(1)) == 0x28:
+ tag_name = jsonTblTags['TAGS'].get(b)
+ buffer.append(0x28)
+
+ b_v = b''
+ while b_v != b'\x29':
+ b_v = src.read(1)
+ b_value += b_v
+ b_value = b_value[:-1]
+ buffer.extend(list(b_value))
+ buffer.append(0x29)
+
+ parameter = int.from_bytes(b_value, "big")
+ tag_param = jsonTblTags.get(tag_name.upper(), {}).get(parameter, None)
+
+ if tag_param is not None:
+ finalText += f"<{tag_param}>"
+ else:
+ finalText += f"<{tag_name}:{parameter}>"
+
+ continue
+
+ if b == 0xC:
+
+ finalText += "<Bubble>"
+ continue
+
+ finalText += "{%02X}" % b
+
+ return finalText, bytes(buffer)
+
+
+def text_to_bytes(text:str):
+ multi_regex = (HEX_TAG + "|" + COMMON_TAG + r"|(\n)")
+ tokens = [sh for sh in re.split(multi_regex, text) if sh]
+ output = b''
+ for t in tokens:
+ # Hex literals
+ if re.match(HEX_TAG, t):
+ output += struct.pack("B", int(t[1:3], 16))
+
+ # Tags
+
+ elif re.match(COMMON_TAG, t):
+ tag, param, *_ = t[1:-1].split(":") + [None]
+
+ if tag == "icon":
+ output += struct.pack("B", ijsonTblTags["TAGS"].get(tag))
+ output += b'\x28' + struct.pack('B', int(param)) + b'\x29'
+
+ elif any(re.match(possible_value, tag) for possible_value in VALID_VOICEID):
+ output += b'\x09\x28' + tag.encode("cp932") + b'\x29'
+
+ elif tag == "Bubble":
+ output += b'\x0C'
+
+ else:
+ if tag in ijsonTblTags["TAGS"]:
+ output += struct.pack("B", ijsonTblTags["TAGS"][tag])
+ continue
+
+ for k, v in ijsonTblTags.items():
+ if tag in v:
+ if k in ['NAME', 'COLOR']:
+ output += struct.pack('B',iTags[k]) + b'\x28' + bytes.fromhex(v[tag]) + b'\x29'
+ break
+ else:
+ output += b'\x81' + bytes.fromhex(v[tag])
+
+ # Actual text
+ elif t == "\n":
+ output += b"\x0A"
+ else:
+ for c in t:
+ if c in PRINTABLE_CHARS or c == "\u3000":
+ output += c.encode("cp932")
+ else:
+
+ if c in ijsonTblTags["TBL"].keys():
+ b = ijsonTblTags["TBL"][c].to_bytes(2, 'big')
+ output += b
+ else:
+ output += c.encode("cp932")
+
+
+ return output
\ No newline at end of file
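`text_to_bytes` above first splits the input on hex literals, control tags, and newlines, keeping the delimiters as their own tokens before encoding each one. A standalone sketch of just that tokenization step, reusing the module's two regexes (the sample tag `<Blue>` is only an example):

```python
import re

HEX_TAG = r"(\{[0-9A-F]{2}\})"
COMMON_TAG = r"(<[\w/]+:?\w+>)"

def tokenize(text):
    # Capturing groups make re.split keep the matched delimiters in the result;
    # unmatched groups come back as None and empty gaps as '', so filter falsy.
    multi_regex = HEX_TAG + "|" + COMMON_TAG + r"|(\n)"
    return [t for t in re.split(multi_regex, text) if t]

tokens = tokenize("Hello{0A}<Blue>world\n")
# tokens == ['Hello', '{0A}', '<Blue>', 'world', '\n']
```

Each token then goes down exactly one branch of the encoder: hex literal, tag, newline, or plain text.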
diff --git a/tools/pythonlib/formats/tss.py b/tools/pythonlib/formats/tss.py
new file mode 100644
index 0000000..cf59722
--- /dev/null
+++ b/tools/pythonlib/formats/tss.py
@@ -0,0 +1,297 @@
+from dataclasses import dataclass
+import struct
+from typing import Optional
+from .FileIO import FileIO
+from .structnode import StructNode, Speaker, Bubble
+import os
+from pathlib import Path
+import re
+import io
+import lxml.etree as etree
+import string
+import subprocess
+from itertools import groupby
+
+bytecode_dict = {
+ "Story": [b'\x0E\x10\x00\x0C\x04', b'\x00\x10\x00\x0C\x04'],
+ "NPC": [b'\x40\x00\x0C\x04', b'\x0E\x00\x00\x82\x02'],
+ "Misc": [b'\x00\x00\x00\x82\x02', b'\x01\x00\x00\x82\x02', b'\x00\xA3\x04']
+}
+
+
+class Tss():
+ def __init__(self, path:Path, list_status_insertion) -> None:
+ self.align = False
+ self.files = []
+ self.file_size = os.path.getsize(path)
+ self.offsets_used = []
+ self.struct_dict = {}
+ self.speaker_dict = {}
+
+ self.id = 1
+ self.struct_id = 1
+ self.speaker_id = 1
+ self.root = etree.Element('SceneText')
+ self.list_status_insertion = list_status_insertion
+ self.VALID_VOICEID = [r'()', r'()', r'()', r'()']
+ self.COMMON_TAG = r"(<[\w/]+:?\w+>)"
+ self.HEX_TAG = r"(\{[0-9A-F]{2}\})"
+ self.PRINTABLE_CHARS = "".join(
+ (string.digits, string.ascii_letters, string.punctuation, " ")
+ )
+
+ with FileIO(path) as tss_f:
+
+ tss_f.read(12)
+ self.strings_offset = struct.unpack('<I', tss_f.read(4))[0]
+
+ def get_speaker_jap_text(self, f:FileIO, struct_found:list):
+ if len(struct_found) > 0:
+ text_offset = struct_found[0].speaker.text_offset
+ jap_text = self.bytes_to_text(f, text_offset)
+ return jap_text
+ else:
+ return 'Variable'
+
+
+ def add_offset_adjusted(self, start, end):
+ self.offsets_used.extend(list(range(start, end)))
+
+ def create_first_nodes(self):
+
+ story_count = len([node for node in self.struct_dict.values() if node.section == "Story"])
+ npc_count = len([node for node in self.struct_dict.values() if node.section == "NPC"])
+ string_count = len([node for node in self.struct_dict.values() if node.section == "Misc"])
+
+ if story_count > 0 or npc_count > 0:
+ speakers_node = etree.SubElement(self.root, 'Speakers')
+ etree.SubElement(speakers_node, 'Section').text = "Speaker"
+
+ if story_count > 0:
+ story_node = etree.SubElement(self.root, 'Strings')
+ etree.SubElement(story_node, 'Section').text = "Story"
+
+ if npc_count > 0:
+ npc_node = etree.SubElement(self.root, 'Strings')
+ etree.SubElement(npc_node, 'Section').text = "NPC"
+
+ #if string_count > 0:
+ # string_node = etree.SubElement(self.root, 'StringsMisc')
+ # etree.SubElement(string_node, 'Section').text = "Misc"
+ def extract_to_xml(self, original_path, translated_path, keep_translations=False):
+
+ self.create_first_nodes()
+
+ self.add_speaker_nodes()
+
+ #Split with bubble the entries
+ for pointer_offset in sorted(self.struct_dict.keys()):
+ struct_obj = self.struct_dict[pointer_offset]
+
+ if struct_obj.section != 'Misc':
+ for struct_text in struct_obj.texts_entry:
+ for bubble in struct_text.bubble_list:
+ self.create_entry(struct_node=struct_obj, bubble=bubble, subid=struct_text.sub_id)
+
+
+
+ # Write the original data into XML
+ txt = etree.tostring(self.root, encoding="UTF-8", pretty_print=True)
+ with open(original_path, "wb") as xmlFile:
+ xmlFile.write(txt)
+
+ if keep_translations:
+ self.copy_translations(original_path, translated_path)
+
+ def add_speaker_nodes(self):
+ speakers = self.root.find('Speakers')
+
+ for speaker_id, speaker_node in self.speaker_dict.items():
+ entry_node = etree.SubElement(speakers, "Entry")
+ etree.SubElement(entry_node, "PointerOffset")
+ etree.SubElement(entry_node, "JapaneseText").text = speaker_node.jap_text
+ etree.SubElement(entry_node, "EnglishText")
+ etree.SubElement(entry_node, "Notes")
+ etree.SubElement(entry_node, "Id").text = str(speaker_id)
+ etree.SubElement(entry_node, "Status").text = 'To Do'
+
+ def create_entry(self, struct_node:StructNode, bubble:Bubble, subid=None):
+
+ # Add it to the XML node
+ if struct_node.section == 'Misc':
+ strings_node = self.root.find('StringsMisc[Section="Misc"]')
+ else:
+ strings_node = self.root.find(f'Strings[Section="{struct_node.section}"]')
+
+ entry_node = etree.SubElement(strings_node, "Entry")
+ etree.SubElement(entry_node, "PointerOffset").text = str(struct_node.pointer_offset)
+ text_split = re.split(self.COMMON_TAG, bubble.jap_text)
+
+ if len(text_split) > 1 and any(re.match(possible_value, bubble.jap_text) for possible_value in self.VALID_VOICEID):
+ etree.SubElement(entry_node, "VoiceId").text = text_split[1].replace('<','').replace('>','')
+ etree.SubElement(entry_node, "JapaneseText").text = ''.join(text_split[2:])
+ else:
+ etree.SubElement(entry_node, "JapaneseText").text = bubble.jap_text
+
+ etree.SubElement(entry_node, "EnglishText")
+ etree.SubElement(entry_node, "Notes")
+
+ if int(struct_node.speaker.id) > 0:
+ etree.SubElement(entry_node, "SpeakerId").text = str(struct_node.speaker.id)
+ etree.SubElement(entry_node, "Id").text = str(struct_node.id)
+
+ if struct_node.section in ["Story", "NPC"]:
+ etree.SubElement(entry_node, "BubbleId").text = str(bubble.id)
+ etree.SubElement(entry_node, "SubId").text = str(subid)
+
+ etree.SubElement(entry_node, "Status").text = "To Do"
+ self.id += 1
+
+ def find_xml_nodes(self, pointer_offset:int):
+ node_found = [struct_node for key, struct_node in self.struct_dict.items()
+ if struct_node.pointer_offset == pointer_offset]
+
+ if len(node_found) > 0:
+ return node_found
+
+ def parse_write_speakers(self, tss:io.BytesIO):
+
+ for speaker_node in self.root.findall("Speakers/Entry"):
+
+
+
+ text_offset = tss.tell()
+ speaker = Speaker(pointer_offset=0, text_offset=text_offset)
+ speaker.set_node_attributes(speaker_node, self.list_status_insertion)
+ speaker.text_offset = tss.tell()
+ self.speaker_dict[speaker.id] = speaker
+ tss.write(speaker.bytes)
+ tss.write(b'\x00')
+
+ def parse_xml_infos(self):
+
+ struct_entries = self.root.findall('Strings/Entry')
+ pointers_list = list(set([int(entry.find("PointerOffset").text) for entry in struct_entries]))
+
+ pointers_list.sort()
+
+ for pointer_offset in pointers_list:
+ entries = [entry for entry in struct_entries if int(entry.find("PointerOffset").text) == pointer_offset]
+ self.struct_dict[pointer_offset].parse_xml_nodes(entries, self.list_status_insertion)
+
+ def copy_translations(self, original_path:Path, translated_path:Path):
+
+ #Open translated XMLs
+ tree = etree.parse(original_path)
+ root_original = tree.getroot()
+
+ tree = etree.parse(translated_path)
+ root_translated = tree.getroot()
+ translated_speakers = {int(entry.find('Id').text):entry for entry in root_translated.findall('Speakers/Entry')
+ if entry.find("Status").text in ['Proofread', 'Edited', 'Done']}
+ translated_entries = {(int(entry.find('PointerOffset').text),
+ int(entry.find("SubId").text),
+ int(entry.find("BubbleId").text)):entry for entry in root_translated.findall('Strings/Entry')
+ if entry.find("Status").text in ['Proofread', 'Edited', 'Done']}
+ original_entries = {(int(entry.find('PointerOffset').text),
+ int(entry.find("SubId").text),
+ int(entry.find("BubbleId").text)): entry for entry in
+ root_original.findall('Strings/Entry')}
+
+ #Speakers
+ for speaker_entry in [entry for entry in root_original.findall('Speakers/Entry')
+ if int(entry.find('Id').text) in translated_speakers.keys()]:
+
+ speaker_id = int(speaker_entry.find("Id").text)
+ speaker_entry.find("EnglishText").text = translated_speakers[speaker_id].find('EnglishText').text
+ speaker_entry.find("Status").text = translated_speakers[speaker_id].find('Status').text
+ notes = translated_speakers[speaker_id].find('Notes')
+
+ if notes is not None:
+ speaker_entry.find("Notes").text = translated_speakers[speaker_id].find('Notes').text
+
+
+ #Main entries
+ original_entries_translated = {(pointer_offset, sub_id, bubble_id):node for (pointer_offset, sub_id, bubble_id), node in original_entries.items()
+ if (pointer_offset, sub_id, bubble_id) in translated_entries.keys()}
+ for (pointer_offset, sub_id, bubble_id), main_entry in original_entries_translated.items():
+ translated_entry = translated_entries[(pointer_offset, sub_id, bubble_id)]
+
+ main_entry.find("EnglishText").text = translated_entry.find('EnglishText').text
+ main_entry.find("Status").text = translated_entry.find('Status').text
+ notes = translated_entry.find('Notes').text
+
+ if notes is not None:
+ main_entry.find("Notes").text = notes
+
+ txt = etree.tostring(root_original, encoding="UTF-8", pretty_print=True)
+ with open(translated_path, "wb") as xmlFile:
+ xmlFile.write(txt)
+
+
+ def pack_tss_file(self, destination_path:Path, xml_path:Path):
+
+
+
+ # Grab the Tss file inside the folder
+ with FileIO(destination_path, 'rb') as original_tss:
+ data = original_tss.read()
+ tss = io.BytesIO(data)
+ tree = etree.parse(xml_path)
+ self.root = tree.getroot()
+
+ start_offset = self.get_starting_offset()
+ tss.seek(start_offset, 0)
+ self.parse_write_speakers(tss)
+ self.parse_xml_infos()
+
+ #Insert all nodes
+ for node in self.struct_dict.values():
+ node.pack_node(tss, self.speaker_dict)
+
+ #Update TSS
+ with FileIO(destination_path, 'wb') as f:
+ f.write(tss.getvalue())
+
+ def get_starting_offset(self):
+ speaker_offset = min([ele.speaker.text_offset for pointer_offset, ele in self.struct_dict.items() if ele.speaker.text_offset > 0], default=None)
+
+ texts_offset = [text_entry.text_offset for key, struct_entry in self.struct_dict.items() for text_entry in struct_entry.texts_entry]
+ texts_offset.sort()
+ text_offset = min(texts_offset[1:], default=None)
+ return min(speaker_offset or self.file_size, text_offset or self.file_size)
\ No newline at end of file
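The `pad` helper in `structnode.py` computes `nb_bytes - offset % nb_bytes`, which yields a full block of zeroes when the offset is already aligned, so callers have to guard it. A variant (hypothetical `pad_to`, not the repo's code) that naturally returns empty padding at a boundary:

```python
def pad_to(offset, alignment, pad_byte=b'\x00'):
    """Return the padding needed to align `offset` up to `alignment`.

    (-offset) % alignment is 0 when already aligned, so no guard is needed.
    """
    rest = (-offset) % alignment
    return pad_byte * rest

# pad_to(8, 4) == b''  and  pad_to(9, 4) == b'\x00\x00\x00'
```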
diff --git a/tools/pythonlib/tests/.gitignore b/tools/pythonlib/tests/.gitignore
new file mode 100644
index 0000000..bbb124e
--- /dev/null
+++ b/tools/pythonlib/tests/.gitignore
@@ -0,0 +1,173 @@
+*.bin
+.vscode/*
+
+# Local History for Visual Studio Code
+.history/
+
+# Built Visual Studio Code Extensions
+*.vsix
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+*.idea
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+# in version control.
+# https://pdm.fming.dev/#use-with-ide
+.pdm.toml
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+TOR_Batch/Extract_Iso.bat
+TOR_Batch/Insert_Iso - only changed.bat
+TOR_Batch/Insert_Iso.bat
diff --git a/tools/pythonlib/tests/TOH/__init__.py b/tools/pythonlib/tests/TOH/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tools/pythonlib/tests/TOH/files/AMUI01P.SCP b/tools/pythonlib/tests/TOH/files/AMUI01P.SCP
new file mode 100644
index 0000000..1d266f6
Binary files /dev/null and b/tools/pythonlib/tests/TOH/files/AMUI01P.SCP differ
diff --git a/tools/pythonlib/tests/TOH/files/AMUT01P.SCP b/tools/pythonlib/tests/TOH/files/AMUT01P.SCP
new file mode 100644
index 0000000..dc9d5dc
Binary files /dev/null and b/tools/pythonlib/tests/TOH/files/AMUT01P.SCP differ
diff --git a/tools/pythonlib/tests/TOH/files/FSHT00.SCP b/tools/pythonlib/tests/TOH/files/FSHT00.SCP
new file mode 100644
index 0000000..ab97454
Binary files /dev/null and b/tools/pythonlib/tests/TOH/files/FSHT00.SCP differ
diff --git a/tools/pythonlib/tests/TOH/files/LITD04P.SCP b/tools/pythonlib/tests/TOH/files/LITD04P.SCP
new file mode 100644
index 0000000..a4b6d54
Binary files /dev/null and b/tools/pythonlib/tests/TOH/files/LITD04P.SCP differ
diff --git a/tools/pythonlib/tests/TOH/files/STRT00.SCP b/tools/pythonlib/tests/TOH/files/STRT00.SCP
new file mode 100644
index 0000000..1e82a16
Binary files /dev/null and b/tools/pythonlib/tests/TOH/files/STRT00.SCP differ
diff --git a/tools/pythonlib/tests/TOH/files/VOLD01P.SCP b/tools/pythonlib/tests/TOH/files/VOLD01P.SCP
new file mode 100644
index 0000000..a61aca6
Binary files /dev/null and b/tools/pythonlib/tests/TOH/files/VOLD01P.SCP differ
diff --git a/tools/pythonlib/tests/TOH/test_structs.py b/tools/pythonlib/tests/TOH/test_structs.py
new file mode 100644
index 0000000..1c0b84a
--- /dev/null
+++ b/tools/pythonlib/tests/TOH/test_structs.py
@@ -0,0 +1,196 @@
+from pathlib import Path
+import os
+import pytest
+import pyjson5 as json
+from pythonlib.formats.structnode import StructNode, StructEntry
+from pythonlib.formats.FileIO import FileIO
+
+#NPC Struct with 6 Unknowns
+#More than 2 entries + End Unknown = 0x0
+def test_struct_npc1(npc_struct1):
+ assert npc_struct1.nb_unknowns == 6
+ assert npc_struct1.speaker.jap_text != ""
+ assert len(npc_struct1.end_unknowns) == 1
+ assert len(npc_struct1.texts_entry) == 8
+
+#Story Struct
+def test_struct_story1(story_struct1):
+ assert story_struct1.nb_unknowns == 2
+ assert story_struct1.speaker.jap_text != ""
+ assert len(story_struct1.end_unknowns) == 0
+ assert len(story_struct1.texts_entry) == 1
+
+#Story Struct2
+def test_struct_story2(story_struct2):
+ assert story_struct2.nb_unknowns == 2
+ assert story_struct2.speaker.jap_text != ""
+ assert len(story_struct2.end_unknowns) == 0
+ assert len(story_struct2.texts_entry) == 1
+
+#String Struct Normal
+def test_struct_string1(string_struct1):
+ assert string_struct1.nb_unknowns == 0
+ assert string_struct1.speaker.jap_text == "Variable"
+ assert len(string_struct1.end_unknowns) == 0
+ assert len(string_struct1.texts_entry) == 1
+
+def test_string_end_of_file(string_end_of_file1, string_end_of_file2):
+ assert string_end_of_file1.nb_unknowns == 0
+ assert string_end_of_file1.speaker.jap_text == "Variable"
+ assert len(string_end_of_file1.end_unknowns) == 0
+ assert len(string_end_of_file1.texts_entry) == 1
+
+ assert string_end_of_file2.nb_unknowns == 0
+ assert string_end_of_file2.speaker.jap_text == "Variable"
+ assert len(string_end_of_file2.end_unknowns) == 0
+ assert len(string_end_of_file2.texts_entry) == 1
+
+def test_string_struct_weird(string_struct_weird):
+ assert string_struct_weird.nb_unknowns == 0
+ assert string_struct_weird.speaker.jap_text == "Variable"
+ assert len(string_struct_weird.texts_entry) == 10
+
+@pytest.fixture
+def npc_struct1():
+ text_offset = 0x1ECA8
+ path = Path('./pythonlib/tests/TOH/files/FSHT00.SCP')
+ file_size = os.path.getsize(path)
+ section = 'NPC'
+
+ with FileIO(path) as tss:
+ tss.seek(0xC)
+ strings_offset = tss.read_uint32()
+
+ node = StructNode(id=0, pointer_offset=0,
+ text_offset=text_offset,
+ tss=tss, strings_offset=strings_offset, file_size=file_size,
+ section=section)
+
+ return node
+
+
+@pytest.fixture
+def story_struct1():
+ text_offset = 0x1FBF8
+ path = Path('./pythonlib/tests/TOH/files/FSHT00.SCP')
+ file_size = os.path.getsize(path)
+ section = 'Story'
+
+ with FileIO(path) as tss:
+ tss.seek(0xC)
+ strings_offset = tss.read_uint32()
+
+ node = StructNode(id=0, pointer_offset=0,
+ text_offset=text_offset,
+ tss=tss, strings_offset=strings_offset, file_size=file_size,
+ section=section)
+
+ return node
+
+@pytest.fixture
+def story_struct2():
+ text_offset = 0x14158
+ path = Path('./pythonlib/tests/TOH/files/VOLD01P.SCP')
+ file_size = os.path.getsize(path)
+ section = 'Story'
+
+ with FileIO(path) as tss:
+ tss.seek(0xC)
+ strings_offset = tss.read_uint32()
+
+ node = StructNode(id=0, pointer_offset=0,
+ text_offset=text_offset,
+ tss=tss, strings_offset=strings_offset, file_size=file_size,
+ section=section)
+
+ return node
+
+@pytest.fixture
+def string_struct1():
+ text_offset = 0x1C540
+ path = Path('./pythonlib/tests/TOH/files/AMUT01P.SCP')
+ file_size = os.path.getsize(path)
+ section = 'Misc'
+
+ with FileIO(path) as tss:
+ tss.seek(0xC)
+ strings_offset = tss.read_uint32()
+
+ node = StructNode(id=0, pointer_offset=0,
+ text_offset=text_offset,
+ tss=tss, strings_offset=strings_offset, file_size=file_size,
+ section=section)
+
+ return node
+
+@pytest.fixture
+def string_struct2():
+ path = Path('./pythonlib/tests/TOH/files/STRT00.SCP')
+ file_size = os.path.getsize(path)
+ text_offset = 0x14DE8
+ section = 'Story'
+
+ with FileIO(path) as tss:
+ tss.seek(0xC)
+ strings_offset = tss.read_uint32()
+
+ node = StructNode(id=0, pointer_offset=0,
+ text_offset=text_offset,
+ tss=tss, strings_offset=strings_offset, file_size=file_size,
+ section=section)
+
+ return node
+
+@pytest.fixture
+def string_end_of_file1():
+ text_offset = 0x1C5F5
+ path = Path('./pythonlib/tests/TOH/files/AMUT01P.SCP')
+ file_size = os.path.getsize(path)
+ section = 'Misc'
+
+ with FileIO(path) as tss:
+ tss.seek(0xC)
+ strings_offset = tss.read_uint32()
+
+ node = StructNode(id=0, pointer_offset=0,
+ text_offset=text_offset,
+ tss=tss, strings_offset=strings_offset, file_size=file_size,
+ section=section)
+
+ return node
+
+@pytest.fixture
+def string_end_of_file2():
+ text_offset = 0x9D14
+ path = Path('./pythonlib/tests/TOH/files/LITD04P.SCP')
+ file_size = os.path.getsize(path)
+ section = 'Misc'
+
+ with FileIO(path) as tss:
+ tss.seek(0xC)
+ strings_offset = tss.read_uint32()
+
+ node = StructNode(id=0, pointer_offset=0,
+ text_offset=text_offset,
+ tss=tss, strings_offset=strings_offset, file_size=file_size,
+ section=section)
+
+ return node
+
+@pytest.fixture
+def string_struct_weird():
+ text_offset = 0x9CEC
+ path = Path('./pythonlib/tests/TOH/files/LITD04P.SCP')
+ file_size = os.path.getsize(path)
+ section = 'Misc'
+
+ with FileIO(path) as tss:
+ tss.seek(0xC)
+ strings_offset = tss.read_uint32()
+
+ node = StructNode(id=0, pointer_offset=0,
+ text_offset=text_offset,
+ tss=tss, strings_offset=strings_offset, file_size=file_size,
+ section=section)
+
+ return node
\ No newline at end of file
diff --git a/tools/pythonlib/tests/TOH/test_tags.json b/tools/pythonlib/tests/TOH/test_tags.json
new file mode 100644
index 0000000..54e8780
--- /dev/null
+++ b/tools/pythonlib/tests/TOH/test_tags.json
@@ -0,0 +1,22 @@
+{
+ "03 28 30 29 82 C6": "と",
+ "82 CD 03 28 33 29 83 4A 83 69": "はカナ",
+ "03 28 35 29 83 54 83 93 83 53 91 44 92 B7 03 28 30 29 82 CC": "サンゴ船長の",
+ "04 28 4B 4F 48 29 82 C9 82 E2 82 C1 82 C6 89 EF": "にやっと会",
+ "04 28 53 49 4E 29 82 C6 04 28 4B 4F 48 29 82 AA": "とが",
+ "04 28 48 49 53 29 82 C6": "と",
+ "81 BE 82 C5 83 60": "でチ",
+ "0B 28 32 29 81 7B 81 BE 82 C5": "+で",
+ "81 BA": "",
+ "09 28 56 53 4D 5F 32 34 5F 30 32 30 5F 30 30 35 29": "",
+ "09 28 56 43 54 5F 30 31 5F 30 30 31 5F 30 30 31 29 82 A4": "う",
+ "09 28 43 30 32 30 35 29 82 E0": "も",
+ "09 28 53 30 37 31 34 31 30 29 89 BB": "化",
+ "90 6C 0C 09 28 53 30 37 31 34 31 31 29 8E E5": "人主",
+ "0A 8E E5": "\n主",
+ "62 65 72 79 6C 30 30 5F 65 30 31 33": "beryl00_e013",
+ "25 73 20 25 64 20 83 4B 83 8B 83 68": "%s %d ガルド",
+ "28 65 39 39 66 30 30 29 92 B9": "(e99f00)鳥",
+ "2F 62 74 6C 2F 63 6F 6D 6D 6F 6E 2F 73 68 61 64 6F 77 2E 64 73 33": "/btl/common/shadow.ds3",
+ "0E 01 0F 01 14 01 15 01 16 01 17 01 18 01 19 01 1A 01 1B 01 1C 01 1D 01 1E 01 0E": "{0E}{01}{0F}{01}{14}{01}{15}{01}{16}{01}{17}{01}{18}{01}{19}{01}{1A}{01}{1B}{01}{1C}{01}{1D}{01}{1E}{01}{0E}"
+}
\ No newline at end of file
diff --git a/tools/pythonlib/tests/TOH/test_tags.py b/tools/pythonlib/tests/TOH/test_tags.py
new file mode 100644
index 0000000..17a7862
--- /dev/null
+++ b/tools/pythonlib/tests/TOH/test_tags.py
@@ -0,0 +1,128 @@
+import os
+from pathlib import Path
+from pythonlib.games import ToolsTOH
+from pythonlib.formats.FileIO import FileIO
+from pythonlib.formats.text_toh import bytes_to_text, text_to_bytes
+import pytest
+import pyjson5 as json
+import pdb
+from io import BytesIO
+base_path = Path('pythonlib/tests/TOH')
+
+
+@pytest.mark.parametrize("n", [0,1,2])
+def test_colors_to_text(config_bytes, n):
+ input = config_bytes[n]['byte']
+ input_bytes = bytes.fromhex(input.replace(' ',''))
+ with FileIO(base_path / 'colors_to_text.bin', 'wb+') as f:
+ f.write(input_bytes)
+ f.write(b'\x00')
+ f.seek(0)
+ expected = config_bytes[n]['text']
+ res, buffer = bytes_to_text(f,0)
+ assert res == expected
+ assert buffer == input_bytes
+
+
+@pytest.mark.parametrize("n", [3,4,5])
+def test_names_to_text(config_bytes, n):
+ input = config_bytes[n]['byte']
+ input_bytes = bytes.fromhex(input.replace(' ', ''))
+ with FileIO(base_path / 'names_to_text.bin', 'wb+') as f:
+ f.write(input_bytes)
+ f.write(b'\x00')
+ f.seek(0)
+ expected = config_bytes[n]['text']
+ res, buffer = bytes_to_text(f, 0)
+ assert res == expected
+ assert buffer == input_bytes
+
+
+@pytest.mark.parametrize("n", [6,7,8])
+def test_buttons_to_text(config_bytes, n):
+ input = config_bytes[n]['byte']
+ input_bytes = bytes.fromhex(input.replace(' ', ''))
+ with FileIO(base_path / 'buttons_to_text.bin', 'wb+') as f:
+ f.write(input_bytes)
+ f.write(b'\x00')
+ f.seek(0)
+ expected = config_bytes[n]['text']
+ res, buffer = bytes_to_text(f, 0)
+ assert res == expected
+ assert buffer == input_bytes
+
+
+@pytest.mark.parametrize("n", [9,10,11,12,13,14])
+def test_voiceid_to_text(config_bytes, n):
+ input = config_bytes[n]['byte']
+ input_bytes = bytes.fromhex(input.replace(' ', ''))
+ with FileIO(base_path / 'voice_to_text.bin', 'wb+') as f:
+ f.write(input_bytes)
+ f.write(b'\x00')
+ f.seek(0)
+ expected = config_bytes[n]['text']
+ res, buffer = bytes_to_text(f, 0)
+ assert res == expected
+ assert buffer == input_bytes
+
+
+@pytest.mark.parametrize("n", [0,1,2])
+def test_colors_to_bytes(config_text, n):
+ input = config_text[n]['text']
+ expected = bytes.fromhex(config_text[n]['byte'].replace(' ',''))
+ res = text_to_bytes(input)
+ assert res.hex() == expected.hex()
+
+
+@pytest.mark.parametrize("n", [3,4,5])
+def test_names_to_bytes(config_text, n):
+ input = config_text[n]['text']
+ expected = bytes.fromhex(config_text[n]['byte'].replace(' ',''))
+ res = text_to_bytes(input)
+ assert res.hex() == expected.hex()
+
+@pytest.mark.parametrize("n", [6,7,8])
+def test_buttons_to_bytes(config_text, n):
+ input = config_text[n]['text']
+ expected = bytes.fromhex(config_text[n]['byte'].replace(' ',''))
+ res = text_to_bytes(input)
+ assert res.hex() == expected.hex()
+
+@pytest.mark.parametrize("n", [9,10,11,12,13,14])
+def test_voiceid_to_bytes(config_text, n):
+ input = config_text[n]['text']
+ expected = bytes.fromhex(config_text[n]['byte'].replace(' ',''))
+ res = text_to_bytes(input)
+ assert res.hex() == expected.hex()
+
+@pytest.mark.parametrize("n", [15,16,17,18,19])
+def test_misc(config_text, n):
+ input = config_text[n]['text']
+ expected = bytes.fromhex(config_text[n]['byte'].replace(' ',''))
+ res = text_to_bytes(input)
+ assert res.hex() == expected.hex()
+
+
+
+@pytest.fixture
+def tales_instance():
+ insert_mask = [
+ '--with-editing', '--with-proofreading'
+ ]
+ tales_instance = ToolsTOH.ToolsTOH(
+ project_file=Path(os.path.join(os.getcwd(), '..','TOH', 'project.json')),
+ insert_mask=insert_mask,
+ changed_only=False)
+ return tales_instance
+
+@pytest.fixture
+def config_bytes():
+ with open(base_path / 'test_tags.json', encoding='utf-8') as f:
+ json_data = json.load(f)
+ return [{"byte":k, "text":json_data[k]} for k in json_data.keys()]
+
+@pytest.fixture
+def config_text():
+ with open(base_path / 'test_tags.json', encoding='utf-8') as f:
+ json_data = json.load(f)
+ return [{"text":json_data[k], "byte":k} for k in json_data.keys()]
diff --git a/tools/pythonlib/tests/__init__.py b/tools/pythonlib/tests/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tools/pythonlib/utils/__init__.py b/tools/pythonlib/utils/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/tools/pythonlib/utils/blz.exe b/tools/pythonlib/utils/blz.exe
new file mode 100644
index 0000000..680c933
Binary files /dev/null and b/tools/pythonlib/utils/blz.exe differ
diff --git a/tools/pythonlib/utils/comptolib.dll b/tools/pythonlib/utils/comptolib.dll
new file mode 100644
index 0000000..28790b1
Binary files /dev/null and b/tools/pythonlib/utils/comptolib.dll differ
diff --git a/tools/pythonlib/utils/comptolib.py b/tools/pythonlib/utils/comptolib.py
new file mode 100644
index 0000000..ccc38c3
--- /dev/null
+++ b/tools/pythonlib/utils/comptolib.py
@@ -0,0 +1,174 @@
+import ctypes
+import struct
+from pathlib import Path
+
+# Error codes
+SUCCESS = 0
+ERROR_FILE_IN = -1
+ERROR_FILE_OUT = -2
+ERROR_MALLOC = -3
+ERROR_BAD_INPUT = -4
+ERROR_UNKNOWN_VERSION = -5
+ERROR_FILES_MISMATCH = -6
+
+
+class ComptoFileInputError(Exception):
+ pass
+
+
+class ComptoFileOutputError(Exception):
+ pass
+
+
+class ComptoMemoryAllocationError(Exception):
+ pass
+
+
+class ComptoBadInputError(Exception):
+ pass
+
+
+class ComptoUnknownVersionError(Exception):
+ pass
+
+
+class ComptoMismatchedFilesError(Exception):
+ pass
+
+
+class ComptoUnknownError(Exception):
+ pass
+
+
+def RaiseError(error: int):
+ if error == SUCCESS:
+ return
+ elif error == ERROR_FILE_IN:
+ raise ComptoFileInputError("Error with input file")
+ elif error == ERROR_FILE_OUT:
+ raise ComptoFileOutputError("Error with output file")
+ elif error == ERROR_MALLOC:
+ raise ComptoMemoryAllocationError("Malloc failure")
+ elif error == ERROR_BAD_INPUT:
+ raise ComptoBadInputError("Bad Input")
+ elif error == ERROR_UNKNOWN_VERSION:
+ raise ComptoUnknownVersionError("Unknown version")
+ elif error == ERROR_FILES_MISMATCH:
+ raise ComptoMismatchedFilesError("Mismatch")
+ else:
+ raise ComptoUnknownError("Unknown error")
+
+
+comptolib_path = Path(__file__).parent / "comptolib.dll"
+
+comptolib = ctypes.cdll.LoadLibrary(str(comptolib_path))
+compto_decode = comptolib.Decode
+compto_decode.argtypes = (
+ ctypes.c_int,
+ ctypes.c_void_p,
+ ctypes.c_int,
+ ctypes.c_void_p,
+ ctypes.POINTER(ctypes.c_uint),
+)
+compto_decode.restype = ctypes.c_int
+
+compto_encode = comptolib.Encode
+compto_encode.argtypes = (
+ ctypes.c_int,
+ ctypes.c_void_p,
+ ctypes.c_int,
+ ctypes.c_void_p,
+ ctypes.POINTER(ctypes.c_uint),
+)
+compto_encode.restype = ctypes.c_int
+
+compto_fdecode = comptolib.DecodeFile
+compto_fdecode.argtypes = ctypes.c_char_p, ctypes.c_char_p, ctypes.c_int, ctypes.c_int
+compto_fdecode.restype = ctypes.c_int
+
+compto_fencode = comptolib.EncodeFile
+compto_fencode.argtypes = ctypes.c_char_p, ctypes.c_char_p, ctypes.c_int, ctypes.c_int
+compto_fencode.restype = ctypes.c_int
+
+
+class ComptoFile:
+ def __init__(self, type: int, data: bytes) -> None:
+ self.type = type
+ self.data = data
+
+ def __getitem__(self, item):
+ return self.data[item]
+
+ def __len__(self):
+ return len(self.data)
+
+
+def compress_data(input: bytes, raw: bool = False, version: int = 3) -> bytes:
+ input_size = len(input)
+ output_size = ((input_size * 9) // 8) + 10
+ output = b"\x00" * output_size
+ output_size = ctypes.c_uint(output_size)
+ error = compto_encode(version, input, input_size, output, ctypes.byref(output_size))
+ RaiseError(error)
+
+ if not raw:
+ output = (
+ struct.pack("<b", version)
+ + struct.pack("<L", output_size.value)
+ + struct.pack("<L", input_size)
+ + output[: output_size.value]
+ )
+ else:
+ output = output[: output_size.value]
+
+ return output
+
+
+def decompress_data(input: bytes, raw: bool = False, version: int = 3) -> bytes:
+ if raw:
+ input_size = len(input)
+ output_size = input_size * 10
+ else:
+ version = struct.unpack("<b", input[:1])[0]
+ input_size = struct.unpack("<L", input[1:5])[0]
+ output_size = struct.unpack("<L", input[5:9])[0]
+ input = input[9:]
+
+ output = b"\x00" * output_size
+ output_size = ctypes.c_uint(output_size)
+ error = compto_decode(version, input, input_size, output, ctypes.byref(output_size))
+ RaiseError(error)
+
+ return output[: output_size.value]
+
+
+def compress_file(input: str, output: str, raw: bool = False, version: int = 3) -> None:
+ error = compto_fencode(input.encode("utf-8"), output.encode("utf-8"), raw, version)
+ RaiseError(error)
+
+
+def decompress_file(
+ input: str, output: str, raw: bool = False, version: int = 3
+) -> None:
+ error = compto_fdecode(input.encode("utf-8"), output.encode("utf-8"), raw, version)
+ RaiseError(error)
+
+
+def is_compressed(data: bytes) -> bool:
+ if len(data) < 0x09:
+ return False
+
+ if data[0] not in (0, 1, 3):
+ return False
+
+ expected_size = struct.unpack("<L", data[1:5])[0]
+
+ return expected_size == len(data) - 9
diff --git a/tools/pythonlib/utils/lzss.c b/tools/pythonlib/utils/lzss.c
new file mode 100644
--- /dev/null
+++ b/tools/pythonlib/utils/lzss.c
+/*----------------------------------------------------------------------------*/
+/*-- lzss.c - LZSS coding for Nintendo GBA/DS --*/
+/*-- Copyright (C) 2011 CUE --*/
+/*-- --*/
+/*-- This program is free software: you can redistribute it and/or modify --*/
+/*-- it under the terms of the GNU General Public License as published by --*/
+/*-- the Free Software Foundation, either version 3 of the License, or --*/
+/*-- (at your option) any later version. --*/
+/*-- --*/
+/*-- This program is distributed in the hope that it will be useful, --*/
+/*-- but WITHOUT ANY WARRANTY; without even the implied warranty of --*/
+/*-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the --*/
+/*-- GNU General Public License for more details. --*/
+/*-- --*/
+/*-- You should have received a copy of the GNU General Public License --*/
+/*-- along with this program. If not, see <http://www.gnu.org/licenses/>. --*/
+/*----------------------------------------------------------------------------*/
+
+/*----------------------------------------------------------------------------*/
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+/*----------------------------------------------------------------------------*/
+#define CMD_DECODE 0x00 // decode
+#define CMD_CODE_10 0x10 // LZSS magic number
+
+#define LZS_NORMAL 0x00 // normal mode, (0)
+#define LZS_FAST 0x80 // fast mode, (1 << 7)
+#define LZS_BEST 0x40 // best mode, (1 << 6)
+
+#define LZS_WRAM 0x00 // VRAM not compatible (LZS_WRAM | LZS_NORMAL)
+#define LZS_VRAM 0x01 // VRAM compatible (LZS_VRAM | LZS_NORMAL)
+#define LZS_WFAST 0x80 // LZS_WRAM fast (LZS_WRAM | LZS_FAST)
+#define LZS_VFAST 0x81 // LZS_VRAM fast (LZS_VRAM | LZS_FAST)
+#define LZS_WBEST 0x40 // LZS_WRAM best (LZS_WRAM | LZS_BEST)
+#define LZS_VBEST 0x41 // LZS_VRAM best (LZS_VRAM | LZS_BEST)
+
+#define LZS_SHIFT 1 // bits to shift
+#define LZS_MASK 0x80 // bits to check:
+ // ((((1 << LZS_SHIFT) - 1) << (8 - LZS_SHIFT)
+
+#define LZS_THRESHOLD 2 // max number of bytes to not encode
+#define LZS_N 0x1000 // max offset (1 << 12)
+#define LZS_F 0x12 // max coded ((1 << 4) + LZS_THRESHOLD)
+#define LZS_NIL LZS_N // index for root of binary search trees
+
+#define RAW_MINIM 0x00000000 // empty file, 0 bytes
+#define RAW_MAXIM 0x00FFFFFF // 3-bytes length, 16MB - 1
+
+#define LZS_MINIM 0x00000004 // header only (empty RAW file)
+#define LZS_MAXIM 0x01400000 // 0x01200003, padded to 20MB:
+ // * header, 4
+ // * length, RAW_MAXIM
+ // * flags, (RAW_MAXIM + 7) / 8
+ // 4 + 0x00FFFFFF + 0x00200000 + padding
+
+/*----------------------------------------------------------------------------*/
+unsigned char ring[LZS_N + LZS_F - 1];
+int dad[LZS_N + 1], lson[LZS_N + 1], rson[LZS_N + 1 + 256];
+int pos_ring, len_ring, lzs_vram;
+
+/*----------------------------------------------------------------------------*/
+#define BREAK(text) { printf(text); return; }
+#define EXIT(text) { printf(text); exit(-1); }
+
+/*----------------------------------------------------------------------------*/
+void Title(void);
+void Usage(void);
+
+char *Load(char *filename, int *length, int min, int max);
+void Save(char *filename, char *buffer, int length);
+char *Memory(int length, int size);
+
+void LZS_Decode(char *filename);
+void LZS_Encode(char *filename, int mode);
+char *LZS_Code(unsigned char *raw_buffer, int raw_len, int *new_len, int best);
+
+char *LZS_Fast(unsigned char *raw_buffer, int raw_len, int *new_len);
+void LZS_InitTree(void);
+void LZS_InsertNode(int r);
+void LZS_DeleteNode(int p);
+
+/*----------------------------------------------------------------------------*/
+int main(int argc, char **argv) {
+ int cmd, mode;
+ int arg;
+
+ Title();
+
+ if (argc < 2) Usage();
+ if (!strcmpi(argv[1], "-d")) { cmd = CMD_DECODE; }
+ else if (!strcmpi(argv[1], "-evn")) { cmd = CMD_CODE_10; mode = LZS_VRAM; }
+ else if (!strcmpi(argv[1], "-ewn")) { cmd = CMD_CODE_10; mode = LZS_WRAM; }
+ else if (!strcmpi(argv[1], "-evf")) { cmd = CMD_CODE_10; mode = LZS_VFAST; }
+ else if (!strcmpi(argv[1], "-ewf")) { cmd = CMD_CODE_10; mode = LZS_WFAST; }
+ else if (!strcmpi(argv[1], "-evo")) { cmd = CMD_CODE_10; mode = LZS_VBEST; }
+ else if (!strcmpi(argv[1], "-ewo")) { cmd = CMD_CODE_10; mode = LZS_WBEST; }
+ else EXIT("Command not supported\n");
+ if (argc < 3) EXIT("Filename not specified\n");
+
+ switch (cmd) {
+ case CMD_DECODE:
+ for (arg = 2; arg < argc; arg++) LZS_Decode(argv[arg]);
+ break;
+ case CMD_CODE_10:
+ for (arg = 2; arg < argc; arg++) LZS_Encode(argv[arg], mode);
+ break;
+ default:
+ break;
+ }
+
+ printf("\nDone\n");
+
+ return(0);
+}
+
+/*----------------------------------------------------------------------------*/
+void Title(void) {
+ printf(
+ "\n"
+ "LZSS - (c) CUE 2011\n"
+ "LZSS coding for Nintendo GBA/DS\n"
+ "\n"
+ );
+}
+
+/*----------------------------------------------------------------------------*/
+void Usage(void) {
+ EXIT(
+ "Usage: LZSS command filename [filename [...]]\n"
+ "\n"
+ "command:\n"
+ " -d ..... decode 'filename'\n"
+ " -evn ... encode 'filename', VRAM compatible, normal mode (LZ10)\n"
+ " -ewn ... encode 'filename', WRAM compatible, normal mode\n"
+ " -evf ... encode 'filename', VRAM compatible, fast mode\n"
+ " -ewf ... encode 'filename', WRAM compatible, fast mode\n"
+ " -evo ... encode 'filename', VRAM compatible, optimal mode (LZ-CUE)\n"
+ " -ewo ... encode 'filename', WRAM compatible, optimal mode (LZ-CUE)\n"
+ "\n"
+ "* multiple filenames and wildcards are permitted\n"
+ "* the original file is overwritten with the new file\n"
+ );
+}
+
+/*----------------------------------------------------------------------------*/
+char *Load(char *filename, int *length, int min, int max) {
+ FILE *fp;
+ int fs;
+ char *fb;
+
+ if ((fp = fopen(filename, "rb")) == NULL) EXIT("\nFile open error\n");
+ fs = filelength(fileno(fp));
+ if ((fs < min) || (fs > max)) EXIT("\nFile size error\n");
+ fb = Memory(fs + 3, sizeof(char));
+ if (fread(fb, 1, fs, fp) != fs) EXIT("\nFile read error\n");
+ if (fclose(fp) == EOF) EXIT("\nFile close error\n");
+
+ *length = fs;
+
+ return(fb);
+}
+
+/*----------------------------------------------------------------------------*/
+void Save(char *filename, char *buffer, int length) {
+ FILE *fp;
+
+ if ((fp = fopen(filename, "wb")) == NULL) EXIT("\nFile create error\n");
+ if (fwrite(buffer, 1, length, fp) != length) EXIT("\nFile write error\n");
+ if (fclose(fp) == EOF) EXIT("\nFile close error\n");
+}
+
+/*----------------------------------------------------------------------------*/
+char *Memory(int length, int size) {
+ char *fb;
+
+ fb = (char *) calloc(length, size);
+ if (fb == NULL) EXIT("\nMemory error\n");
+
+ return(fb);
+}
+
+/*----------------------------------------------------------------------------*/
+void LZS_Decode(char *filename) {
+ unsigned char *pak_buffer, *raw_buffer, *pak, *raw, *pak_end, *raw_end;
+ unsigned int pak_len, raw_len, header, len, pos;
+ unsigned char flags, mask;
+
+ printf("- decoding '%s'", filename);
+
+ pak_buffer = Load(filename, &pak_len, LZS_MINIM, LZS_MAXIM);
+
+ header = *pak_buffer;
+ if (header != CMD_CODE_10) {
+ free(pak_buffer);
+ BREAK(", WARNING: file is not LZSS encoded!\n");
+ }
+
+ raw_len = *(unsigned int *)pak_buffer >> 8;
+ raw_buffer = (unsigned char *) Memory(raw_len, sizeof(char));
+
+ pak = pak_buffer + 4;
+ raw = raw_buffer;
+ pak_end = pak_buffer + pak_len;
+ raw_end = raw_buffer + raw_len;
+
+ mask = 0;
+
+ while (raw < raw_end) {
+ if (!(mask >>= LZS_SHIFT)) {
+ if (pak == pak_end) break;
+ flags = *pak++;
+ mask = LZS_MASK;
+ }
+
+ if (!(flags & mask)) {
+ if (pak == pak_end) break;
+ *raw++ = *pak++;
+ } else {
+ if (pak + 1 >= pak_end) break;
+ pos = *pak++;
+ pos = (pos << 8) | *pak++;
+ len = (pos >> 12) + LZS_THRESHOLD + 1;
+ if (raw + len > raw_end) {
+ printf(", WARNING: wrong decoded length!");
+ len = raw_end - raw;
+ }
+ pos = (pos & 0xFFF) + 1;
+ while (len--) *raw++ = *(raw - pos);
+ }
+ }
+
+ raw_len = raw - raw_buffer;
+
+ if (raw != raw_end) printf(", WARNING: unexpected end of encoded file!");
+
+ Save(filename, raw_buffer, raw_len);
+
+ free(raw_buffer);
+ free(pak_buffer);
+
+ printf("\n");
+}
+
+/*----------------------------------------------------------------------------*/
+void LZS_Encode(char *filename, int mode) {
+ unsigned char *raw_buffer, *pak_buffer, *new_buffer;
+ unsigned int raw_len, pak_len, new_len;
+
+ lzs_vram = mode & 0xF;
+
+ printf("- encoding '%s'", filename);
+
+ raw_buffer = Load(filename, &raw_len, RAW_MINIM, RAW_MAXIM);
+
+ pak_buffer = NULL;
+ pak_len = LZS_MAXIM + 1;
+
+ if (!(mode & LZS_FAST)) {
+ mode = mode & LZS_BEST ? 1 : 0;
+ new_buffer = LZS_Code(raw_buffer, raw_len, &new_len, mode);
+ } else {
+ new_buffer = LZS_Fast(raw_buffer, raw_len, &new_len);
+ }
+ if (new_len < pak_len) {
+ if (pak_buffer != NULL) free(pak_buffer);
+ pak_buffer = new_buffer;
+ pak_len = new_len;
+ }
+
+ Save(filename, pak_buffer, pak_len);
+
+ free(pak_buffer);
+ free(raw_buffer);
+
+ printf("\n");
+}
+
+/*----------------------------------------------------------------------------*/
+char *LZS_Code(unsigned char *raw_buffer, int raw_len, int *new_len, int best) {
+ unsigned char *pak_buffer, *pak, *raw, *raw_end, *flg;
+ unsigned int pak_len, len, pos, len_best, pos_best;
+ unsigned int len_next, pos_next, len_post, pos_post;
+ unsigned char mask;
+
+#define SEARCH(l,p) { \
+ l = LZS_THRESHOLD; \
+ \
+ pos = raw - raw_buffer >= LZS_N ? LZS_N : raw - raw_buffer; \
+ for ( ; pos > lzs_vram; pos--) { \
+ for (len = 0; len < LZS_F; len++) { \
+ if (raw + len == raw_end) break; \
+ if (*(raw + len) != *(raw + len - pos)) break; \
+ } \
+ \
+ if (len > l) { \
+ p = pos; \
+ if ((l = len) == LZS_F) break; \
+ } \
+ } \
+}
+
+ pak_len = 4 + raw_len + ((raw_len + 7) / 8);
+ pak_buffer = (unsigned char *) Memory(pak_len, sizeof(char));
+
+ *(unsigned int *)pak_buffer = CMD_CODE_10 | (raw_len << 8);
+
+ pak = pak_buffer + 4;
+ raw = raw_buffer;
+ raw_end = raw_buffer + raw_len;
+
+ mask = 0;
+
+ while (raw < raw_end) {
+ if (!(mask >>= LZS_SHIFT)) {
+ *(flg = pak++) = 0;
+ mask = LZS_MASK;
+ }
+
+ SEARCH(len_best, pos_best);
+
+ // LZ-CUE optimization start
+ if (best) {
+ if (len_best > LZS_THRESHOLD) {
+ if (raw + len_best < raw_end) {
+ raw += len_best;
+ SEARCH(len_next, pos_next);
+ raw -= len_best - 1;
+ SEARCH(len_post, pos_post);
+ raw--;
+
+ if (len_next <= LZS_THRESHOLD) len_next = 1;
+ if (len_post <= LZS_THRESHOLD) len_post = 1;
+
+ if (len_best + len_next <= 1 + len_post) len_best = 1;
+ }
+ }
+ }
+ // LZ-CUE optimization end
+
+ if (len_best > LZS_THRESHOLD) {
+ raw += len_best;
+ *flg |= mask;
+ *pak++ = ((len_best - (LZS_THRESHOLD + 1)) << 4) | ((pos_best - 1) >> 8);
+ *pak++ = (pos_best - 1) & 0xFF;
+ } else {
+ *pak++ = *raw++;
+ }
+ }
+
+ *new_len = pak - pak_buffer;
+
+ return(pak_buffer);
+}
+
+/*----------------------------------------------------------------------------*/
+char *LZS_Fast(unsigned char *raw_buffer, int raw_len, int *new_len) {
+ unsigned char *pak_buffer, *pak, *raw, *raw_end, *flg;
+ unsigned int pak_len, len, r, s, len_tmp, i;
+ unsigned char mask;
+
+ pak_len = 4 + raw_len + ((raw_len + 7) / 8);
+ pak_buffer = (unsigned char *) Memory(pak_len, sizeof(char));
+
+ *(unsigned int *)pak_buffer = CMD_CODE_10 | (raw_len << 8);
+
+ pak = pak_buffer + 4;
+ raw = raw_buffer;
+ raw_end = raw_buffer + raw_len;
+
+ LZS_InitTree();
+
+ r = s = 0;
+
+ len = raw_len < LZS_F ? raw_len : LZS_F;
+ while (r < LZS_N - len) ring[r++] = 0;
+
+ for (i = 0; i < len; i++) ring[r + i] = *raw++;
+
+ LZS_InsertNode(r);
+
+ mask = 0;
+
+ while (len) {
+ if (!(mask >>= LZS_SHIFT)) {
+ *(flg = pak++) = 0;
+ mask = LZS_MASK;
+ }
+
+ if (len_ring > len) len_ring = len;
+
+ if (len_ring > LZS_THRESHOLD) {
+ *flg |= mask;
+ pos_ring = ((r - pos_ring) & (LZS_N - 1)) - 1;
+ *pak++ = ((len_ring - LZS_THRESHOLD - 1) << 4) | (pos_ring >> 8);
+ *pak++ = pos_ring & 0xFF;
+ } else {
+ len_ring = 1;
+ *pak++ = ring[r];
+ }
+
+ len_tmp = len_ring;
+ for (i = 0; i < len_tmp; i++) {
+ if (raw == raw_end) break;
+ LZS_DeleteNode(s);
+ ring[s] = *raw++;
+ if (s < LZS_F - 1) ring[s + LZS_N] = ring[s];
+ s = (s + 1) & (LZS_N - 1);
+ r = (r + 1) & (LZS_N - 1);
+ LZS_InsertNode(r);
+ }
+ while (i++ < len_tmp) {
+ LZS_DeleteNode(s);
+ s = (s + 1) & (LZS_N - 1);
+ r = (r + 1) & (LZS_N - 1);
+ if (--len) LZS_InsertNode(r);
+ }
+ }
+
+ *new_len = pak - pak_buffer;
+
+ return(pak_buffer);
+}
+
+/*----------------------------------------------------------------------------*/
+void LZS_InitTree(void) {
+ int i;
+
+ for (i = LZS_N + 1; i <= LZS_N + 256; i++)
+ rson[i] = LZS_NIL;
+
+ for (i = 0; i < LZS_N; i++)
+ dad[i] = LZS_NIL;
+}
+
+/*----------------------------------------------------------------------------*/
+void LZS_InsertNode(int r) {
+ unsigned char *key;
+ int i, p, cmp, prev;
+
+ prev = (r - 1) & (LZS_N - 1);
+
+ cmp = 1;
+ len_ring = 0;
+
+ key = &ring[r];
+ p = LZS_N + 1 + key[0];
+
+ rson[r] = lson[r] = LZS_NIL;
+
+ for ( ; ; ) {
+ if (cmp >= 0) {
+ if (rson[p] != LZS_NIL) p = rson[p];
+ else { rson[p] = r; dad[r] = p; return; }
+ } else {
+ if (lson[p] != LZS_NIL) p = lson[p];
+ else { lson[p] = r; dad[r] = p; return; }
+ }
+
+ for (i = 1; i < LZS_F; i++)
+ if ((cmp = key[i] - ring[p + i])) break;
+
+ if (i > len_ring) {
+ if (!lzs_vram || (p != prev)) {
+ pos_ring = p;
+ if ((len_ring = i) == LZS_F) break;
+ }
+ }
+ }
+
+ dad[r] = dad[p]; lson[r] = lson[p]; rson[r] = rson[p];
+
+ dad[lson[p]] = r; dad[rson[p]] = r;
+
+ if (rson[dad[p]] == p) rson[dad[p]] = r;
+ else lson[dad[p]] = r;
+
+ dad[p] = LZS_NIL;
+}
+
+/*----------------------------------------------------------------------------*/
+void LZS_DeleteNode(int p) {
+ int q;
+
+ if (dad[p] == LZS_NIL) return;
+
+ if (rson[p] == LZS_NIL) {
+ q = lson[p];
+ } else if (lson[p] == LZS_NIL) {
+ q = rson[p];
+ } else {
+ q = lson[p];
+ if (rson[q] != LZS_NIL) {
+ do {
+ q = rson[q];
+ } while (rson[q] != LZS_NIL);
+
+ rson[dad[q]] = lson[q]; dad[lson[q]] = dad[q];
+ lson[q] = lson[p]; dad[lson[p]] = q;
+ }
+
+ rson[q] = rson[p]; dad[rson[p]] = q;
+ }
+
+ dad[q] = dad[p];
+
+ if (rson[dad[p]] == p) rson[dad[p]] = q;
+ else lson[dad[p]] = q;
+
+ dad[p] = LZS_NIL;
+}
+
+/*----------------------------------------------------------------------------*/
+/*-- EOF Copyright (C) 2011 CUE --*/
+/*----------------------------------------------------------------------------*/
diff --git a/tools/pythonlib/utils/lzss.exe b/tools/pythonlib/utils/lzss.exe
new file mode 100644
index 0000000..ee7db3b
Binary files /dev/null and b/tools/pythonlib/utils/lzss.exe differ
diff --git a/tools/pythonlib/utils/lzx.exe b/tools/pythonlib/utils/lzx.exe
new file mode 100644
index 0000000..b5c87fa
Binary files /dev/null and b/tools/pythonlib/utils/lzx.exe differ
diff --git a/tools/pythonlib/utils/ndstool.exe b/tools/pythonlib/utils/ndstool.exe
new file mode 100644
index 0000000..df24478
Binary files /dev/null and b/tools/pythonlib/utils/ndstool.exe differ