LongCat-Next: Turning Images, Audio, and Text into Tokens – What’s Next?
LongCat-Next is a 68.5‑billion‑parameter discrete‑native autoregressive multimodal model that tokenizes images, audio, and text into a shared token space. It challenges the belief that visual tokenization necessarily loses detail, matching specialized models on fine‑grained tasks, and demonstrates that joint understanding‑and‑generation training can even improve generation quality.
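To make the "discrete‑native" idea concrete, here is a minimal sketch of how a model might map tokens from several modalities into one unified vocabulary so a single autoregressive transformer can model them jointly. The vocabulary sizes, offsets, and function names below are illustrative assumptions, not LongCat‑Next's actual configuration.

```python
# Hypothetical vocabulary sizes for each modality's discrete tokenizer.
TEXT_VOCAB = 32_000    # e.g. BPE text tokens
IMAGE_VOCAB = 8_192    # e.g. VQ codebook indices for image patches
AUDIO_VOCAB = 4_096    # e.g. neural audio codec indices

# Each modality gets a disjoint ID range in the shared vocabulary,
# so the autoregressive model sees one flat token stream.
IMAGE_OFFSET = TEXT_VOCAB
AUDIO_OFFSET = TEXT_VOCAB + IMAGE_VOCAB

def to_unified(text_ids, image_ids, audio_ids):
    """Concatenate modality streams into one sequence over the shared vocab."""
    seq = list(text_ids)
    seq += [IMAGE_OFFSET + i for i in image_ids]   # shift image IDs past text range
    seq += [AUDIO_OFFSET + a for a in audio_ids]   # shift audio IDs past image range
    return seq

# Text IDs stay as-is; image and audio IDs are shifted into their own ranges.
seq = to_unified([5, 17], [0, 100], [3])
```

In practice such models also insert modality boundary markers and interleave spans rather than appending whole streams, but the key point survives this simplification: once every modality is a run of discrete IDs in one vocabulary, a single next‑token objective covers understanding and generation alike.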
