[WIP] Create an engine mock + component tests that use it #2152
base: main
Conversation
Summary of theme-related behaviors that seemed off
Got a snapshot test working, for now!
Force-pushed from e9a6b07 to 1240838
AudioContext is used when creating sequences; I put together a PR (#2275) that extracts this processing into a separate function.
Oh, I see!! Thank you!!
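For anyone following along, a minimal sketch of what extracting AudioContext creation behind an injectable factory could look like, so tests can swap in a stub; the setAudioContextFactory / getAudioContext names below are hypothetical and not necessarily the actual API of #2275.
// Hypothetical sketch: hide AudioContext creation behind a swappable factory
// so sequence-creation code no longer depends on the browser global directly.
type AudioContextFactory = () => BaseAudioContext;

let audioContextFactory: AudioContextFactory = () => new AudioContext();

// Tests or non-browser environments can replace the factory with a stub.
export const setAudioContextFactory = (factory: AudioContextFactory) => {
  audioContextFactory = factory;
};

// Sequence creation asks the factory for a context instead of calling `new AudioContext()`.
export const getAudioContext = (): BaseAudioContext => audioContextFactory();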
I tried adding a few notes and running RENDER, and yes, it does look pretty rough without AudioContext!! For reference, this is the code I wrote:
import { beforeEach, describe, it, vi } from "vitest";
import { createStoreWrapper } from "@/store";
import { NoteId, Sandbox, SandboxKey, TrackId } from "@/type/preload";
import { resetMockMode, uuid4 } from "@/helpers/random";
import { cloneWithUnwrapProxy } from "@/helpers/cloneWithUnwrapProxy";
import { createDefaultTrack } from "@/sing/domain";
import { proxyStoreCreator } from "@/store/proxy";
import { createOpenAPIEngineMock } from "@/mock/engineMock";
const store = createStoreWrapper({
proxyStoreDI: proxyStoreCreator(createOpenAPIEngineMock()),
});
const initialState = cloneWithUnwrapProxy(store.state);
beforeEach(() => {
store.replaceState(initialState);
resetMockMode();
});
describe("RENDER", async () => {
// FIXME: make this more generic later
// @ts-expect-error assigning to a readonly property for the mock
window[SandboxKey] = {
logInfo: (...args: unknown[]) => {
console.log("[logInfo]", ...args);
},
logWarn: (...args: unknown[]) => {
console.warn("[logWarn]", ...args);
},
logError: (...args: unknown[]) => {
console.error("[logError]", ...args);
},
};
it("空のトラックをレンダリングできる", async () => {
const { trackId, track } = await store.actions.CREATE_TRACK();
store.mutations.INSERT_TRACK({
trackId,
track,
prevTrackId: undefined,
});
await store.actions.RENDER();
await vi.waitFor(() => {
if (store.state.nowRendering) {
throw new Error("now rendering");
}
});
});
it("ノートがあるトラックをレンダリングできる", async () => {
const { trackId, track } = await store.actions.CREATE_TRACK();
track.notes.push({
id: NoteId(uuid4()),
position: 0,
duration: 1,
noteNumber: 60,
lyric: "あ",
});
store.mutations.INSERT_TRACK({
trackId,
track,
prevTrackId: undefined,
});
await store.actions.RENDER();
await vi.waitFor(() => {
if (store.state.nowRendering) {
throw new Error("now rendering");
}
});
});
});
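As a stopgap until #2275 lands, one could also stub the global AudioContext in the test setup so sequence creation doesn't throw in a Node test environment; a rough sketch, where the stub only needs to cover whatever members the code under test actually touches:
// Rough sketch: register a bare-bones AudioContext stub for the Node test
// environment. Extend it (currentTime, destination, createGain(), ...) as the
// code under test requires.
import { vi } from "vitest";

class AudioContextStub {
  close(): Promise<void> {
    return Promise.resolve();
  }
}

vi.stubGlobal("AudioContext", AudioContextStub);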
I split the note ID changes off into their own pull request! I also split off the theme-related changes into a separate pull request!
Done.
But the pull request has gotten too big and I'm not sure what to do about it...
For now I'll try to see whether it can also be used for e2e tests. Once that works, I'll think about how to go about merging it.
📝 Having kuromoji bundled by default (for debugging) seems fine.
I had been faking the engine API by mocking its network traffic in Playwright, but if we switch to injecting the mock into Vuex via DI, all of this should become unnecessary.
await page.route(/\/version$/, async (route) => {
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify("mock"),
});
});
await page.route(/\/engine_manifest$/, async (route) => {
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(EngineManifestToJSON(getEngineManifestMock())),
});
});
await page.route(/\/supported_devices$/, async (route) => {
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(
SupportedDevicesInfoToJSON({ cpu: true, cuda: false, dml: false }),
),
});
});
await page.route(new RegExp(`/${assetsPath}/`), async (route) => {
const filePath = path.join(
__dirname,
"..",
"..",
"..",
new URL(route.request().url()).pathname,
);
const body = await fs.readFile(filePath);
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "image/png",
},
body,
});
});
await page.route(/\/speakers$/, async (route) => {
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(speakers.map(SpeakerToJSON)),
});
});
await page.route(/\/speaker_info\?/, async (route) => {
const query = new URLSearchParams(route.request().url().split("?")[1]);
const speakerUuid = query.get("speaker_uuid");
if (speakerUuid == null) {
throw new Error("speaker_uuid is required");
}
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(SpeakerInfoToJSON(getSpeakerInfoMock(speakerUuid))),
});
});
await page.route(/\/singers$/, async (route) => {
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(singers.map(SpeakerToJSON)),
});
});
await page.route(/\/singer_info\?/, async (route) => {
const payload = new URLSearchParams(new URL(route.request().url()).search);
const speakerUuid = payload.get("speaker_uuid");
if (speakerUuid == undefined) {
throw new Error("speaker_uuid is required");
}
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(SpeakerInfoToJSON(getSpeakerInfoMock(speakerUuid))),
});
});
await page.route(/\/is_initialized_speaker/, async (route) => {
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(true),
});
});
await page.route(/\/initialize_speaker/, async (route) => {
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
});
});
// NOTE: return an empty user dictionary
await page.route(/\/user_dict$/, async (route) => {
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify([]),
});
});
await page.route(/\/audio_query/, async (route) => {
const payload = new URLSearchParams(new URL(route.request().url()).search);
const text = payload.get("text");
const speaker = Number(payload.get("speaker"));
if (text == undefined || speaker == undefined) {
throw new Error("text, speaker is required");
}
const accentPhrases = await textToActtentPhrasesMock(text, speaker);
return route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(
AudioQueryToJSON({
accentPhrases,
speedScale: 1.0,
pitchScale: 0.0,
intonationScale: 1.0,
volumeScale: 1.0,
prePhonemeLength: 0.1,
postPhonemeLength: 0.1,
outputSamplingRate: getEngineManifestMock().defaultSamplingRate,
outputStereo: false,
}),
),
});
});
await page.route(/\/accent_phrases/, async (route) => {
const payload = new URLSearchParams(new URL(route.request().url()).search);
const text = payload.get("text");
const speaker = Number(payload.get("speaker"));
if (text == undefined || speaker == undefined) {
throw new Error("text, speaker is required");
}
const isKana = payload.get("is_kana") === "true";
if (isKana) {
throw new Error("AquesTalk-style kana notation is not supported");
}
const accentPhrases = await textToActtentPhrasesMock(text, speaker);
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(accentPhrases.map(AccentPhraseToJSON)),
});
});
await page.route(/\/mora_data/, async (route) => {
const payload = new URLSearchParams(new URL(route.request().url()).search);
const speaker = Number(payload.get("speaker"));
const accentPhraseRaw = route.request().postData();
if (accentPhraseRaw == undefined || speaker == undefined) {
throw new Error("accent_phrase, speaker is required");
}
const accentPhrase = (JSON.parse(accentPhraseRaw) as []).map(
AccentPhraseFromJSON,
);
replaceLengthMock(accentPhrase, speaker);
replacePitchMock(accentPhrase, speaker);
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(accentPhrase.map(AccentPhraseToJSON)),
});
});
await page.route(/\/synthesis/, async (route) => {
const payload = new URLSearchParams(new URL(route.request().url()).search);
const speaker = Number(payload.get("speaker"));
const enableInterrogativeUpspeak =
payload.get("enable_interrogative_upspeak") === "true";
const audioQueryRaw = route.request().postData();
if (audioQueryRaw == undefined || speaker == undefined) {
throw new Error("audio_query, speaker is required");
}
const audioQuery = AudioQueryFromJSON(JSON.parse(audioQueryRaw));
const frameAudioQuery = audioQueryToFrameAudioQueryMock(audioQuery, {
enableInterrogativeUpspeak,
});
const buffer = synthesisFrameAudioQueryMock(frameAudioQuery, speaker);
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "audio/wav",
},
body: Buffer.from(buffer),
});
});
await page.route(/\/sing_frame_audio_query/, async (route) => {
const payload = new URLSearchParams(new URL(route.request().url()).search);
const speaker = Number(payload.get("speaker"));
const scoreRaw = route.request().postData();
if (scoreRaw == undefined || speaker == undefined) {
throw new Error("score, speaker is required");
}
const score = ScoreFromJSON(JSON.parse(scoreRaw));
const phonemes = notesToFramePhonemesMock(score.notes, speaker);
const f0 = notesAndFramePhonemesToPitchMock(score.notes, phonemes, speaker);
const volume = notesAndFramePhonemesAndPitchToVolumeMock(
score.notes,
phonemes,
f0,
speaker,
);
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(
FrameAudioQueryToJSON({
f0,
volume,
phonemes,
volumeScale: 1.0,
outputSamplingRate: getEngineManifestMock().defaultSamplingRate,
outputStereo: false,
}),
),
});
});
await page.route(/\/sing_frame_volume/, async (route) => {
const payload = new URLSearchParams(new URL(route.request().url()).search);
const speaker = Number(payload.get("speaker"));
const raw = route.request().postData();
if (raw == undefined || speaker == undefined) {
throw new Error("score, speaker is required");
}
const { score, frameAudioQuery } =
BodySingFrameVolumeSingFrameVolumePostFromJSON(JSON.parse(raw));
const volume = notesAndFramePhonemesAndPitchToVolumeMock(
score.notes,
frameAudioQuery.phonemes,
frameAudioQuery.f0,
speaker,
);
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "application/json",
},
body: JSON.stringify(volume),
});
});
await page.route(/\/frame_synthesis/, async (route) => {
const payload = new URLSearchParams(new URL(route.request().url()).search);
const speaker = Number(payload.get("speaker"));
const frameAudioQueryRaw = route.request().postData();
if (frameAudioQueryRaw == undefined || speaker == undefined) {
throw new Error("frame_audio_query, speaker is required");
}
const frameAudioQuery = FrameAudioQueryFromJSON(
JSON.parse(frameAudioQueryRaw),
);
const buffer = synthesisFrameAudioQueryMock(frameAudioQuery, speaker);
await route.fulfill({
status: 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Content-Type": "audio/wav",
},
body: Buffer.from(buffer),
});
});
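For comparison, with the engine mock injected at the store level (the same approach as the Vitest snippet earlier in this thread), the store's engine calls go straight to the in-process mock, so none of the page.route handlers above are needed:
// Store-level DI of the engine mock, reusing createOpenAPIEngineMock and
// proxyStoreCreator from this PR; no network interception required.
import { createStoreWrapper } from "@/store";
import { proxyStoreCreator } from "@/store/proxy";
import { createOpenAPIEngineMock } from "@/mock/engineMock";

const store = createStoreWrapper({
  proxyStoreDI: proxyStoreCreator(createOpenAPIEngineMock()),
});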
📝 For now, for the default engine URL
📝 I'd like to use the browser-side kuromoji class when running on Node as well. Also, I'd like to move the implementation out into an external repository.
Contents
A pull request aimed at resolving #2144.
While I'm at it, I'm also experimenting with various ways of doing component tests in Storybook.
I got as far as displaying TalkEditor and running pitch inference with the mock engine, but for some reason the scss isn't loaded and the splitter's color isn't applied.
It may well be a Vite (or similar) configuration issue, so if anyone familiar with this could help, that would be much appreciated 🙇
Update: figured it out!!! It was probably because the colors weren't being initialized!!
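For reference, a minimal sketch of what that kind of color initialization could look like as a Storybook decorator; the variable names and values here are hypothetical, the real ones come from the app's theme definition.
// Hypothetical decorator: set the theme's CSS custom properties before the
// story mounts, so scss rules that reference them (e.g. the splitter color)
// resolve to actual values instead of staying transparent.
import type { Decorator } from "@storybook/vue3";

const themeColors: Record<string, string> = {
  "--color-splitter": "#CCCCCC", // placeholder value
};

export const withTheme: Decorator = (story) => {
  for (const [name, value] of Object.entries(themeColors)) {
    document.documentElement.style.setProperty(name, value);
  }
  return story();
};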
Related Issue
fix #2144
Screenshots / videos
There's no boundary line, as shown here. Strictly speaking it's probably transparent.
Other