What?! Then what was the point of having it load the cpt at the start... so there's no way to do additional training? — was right where Claude's chat-length limit hit lol
Always include ALL your source files in the files list when training. If you want to add new files later:
# Include both old AND new files
files = ["yin.wav", "yang.wav", "new1.wav", "new2.wav"]
This explains perfectly why your model keeps generating the same patterns - it's literally forgotten everything except the last training session! You need to retrain with all files together to get the mixed, varied output you want.
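A minimal sketch of the point Claude is making. The actual fine-tuning call never appears in the chat, so `train` below is a toy stand-in whose only job is to mimic the "remembers just the last session" behaviour:

```python
def train(concept, files, _memory={}):
    # Toy stand-in for the real fine-tuning call (not shown in the chat):
    # this "model" only keeps the file list from the most recent session,
    # mimicking the forgetting behaviour described above.
    _memory[concept] = list(files)
    return _memory[concept]

old_files = ["yin.wav", "yang.wav"]
new_files = ["new1.wav", "new2.wav"]

train("niri", old_files)           # first session
known = train("niri", new_files)   # WRONG: only the new files survive
# known == ["new1.wav", "new2.wav"]

known = train("niri", old_files + new_files)  # RIGHT: old AND new together
# known == ["yin.wav", "yang.wav", "new1.wav", "new2.wav"]
```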
I had ChatGPT explain the rest:
A Different File With the Same Name
The model assumes it's seeing the same "concept"
→ But now the content is different (e.g. yin.wav was soft before, now it’s harsh)
→ The model says “oh! I must have misunderstood this keyword”
→ It adapts to fit the new version, and forgets the old one
This is catastrophic forgetting — classic in small models.
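The mechanism can be shown with a deliberately tiny toy, nothing like the real audio model: one weight trained by gradient descent toward "yin = soft", then retrained on a replaced "yin = harsh". The numeric targets (0.2, 0.9) are made up purely for illustration:

```python
def fit(w, target, steps=200, lr=0.1):
    # Plain gradient descent on the squared error (w - target)^2.
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

w = 0.0
w = fit(w, 0.2)   # session 1: "yin" is soft  -> w converges to ~0.2
w = fit(w, 0.9)   # session 2: same filename, new harsh content -> w ~0.9
# Nothing anchors the old association: the weight simply moves on,
# which is catastrophic forgetting in miniature.
```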
So it falls into catastrophic forgetting, the kind small models are prone to, huh?
Just as I was about to close the laptop, something occurred to me and I asked. The upshot: prepare a decent number of wavs tied to the target keyword in one batch, and when doing additional training, keep any additions or removals down to tiny incremental changes; then, apparently, it doesn't get reset.
🧠 Keep all your previous training files
🆕 Add just one or a few new .wav files each session
✅ Continue training "niri" using the full list every time
You’ll build up "niri" as a flexible but consistent style — a real sonic persona.
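The session loop ChatGPT describes can be sketched like this. Again, `train` is a hypothetical stand-in (here it just records what each session saw); the filenames match the earlier examples:

```python
sessions = []

def train(concept, files):
    # Hypothetical stand-in for the real fine-tuning call:
    # record which files each training session was shown.
    sessions.append((concept, list(files)))

training_set = ["yin.wav", "yang.wav"]   # every file used so far

def next_session(new_files):
    # Add just a few files, keep ALL the old ones, retrain on the union.
    training_set.extend(new_files)
    train("niri", training_set)

next_session(["new1.wav"])   # session 2: a small, incremental change
next_session(["new2.wav"])   # session 3: still carries the full history
```

Each session sees a superset of the last, so no single retraining step can wipe out what "niri" already meant.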
And so,
This is basically doing cumulative learning, also called continual fine-tuning.
is what this amounts to, apparently.
I'm grateful it doesn't snap back with a "that's what I told you earlier!" It's probably a fix that had been pointed out before, but at that point I had no idea what I was being told.