Arcee AI Trinity Large: An Open 400B MoE Model
Architecture Breakdown + Live Code Review Demo
I built a code review app powered by Arcee AI's new Trinity Large model and deployed it to a Hugging Face Space. Plenty of examples are included, and you also get verdicts from Linus Torvalds, Donald Knuth, and Bjarne Stroustrup.
In this video, I walk through a live demo: paste a GitHub URL, hit review, and watch it catch real issues, from security flaws to logic bugs to missing edge cases. It's fast, free to try, and the large context window lets it process entire files without chunking.
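Under the hood, a review like this usually boils down to wrapping the file's contents in a prompt and sending it to a chat-completions endpoint. Here is a minimal sketch of how such a request might be assembled; the model identifier, endpoint URL, and reviewer-persona wording are illustrative assumptions, not the app's actual code:

```python
def build_review_payload(filename: str, code: str,
                         persona: str = "Linus Torvalds") -> dict:
    """Assemble a chat-completions-style payload asking for a code review.

    The persona string steers the review voice; the model name below is
    an assumed placeholder, not a confirmed identifier.
    """
    prompt = (
        f"Review the file `{filename}` in the style of {persona}. "
        "Flag security flaws, logic bugs, and missing edge cases.\n\n"
        f"```\n{code}\n```"
    )
    return {
        "model": "arcee-ai/trinity-large",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_review_payload("app.py", "eval(input())  # user-controlled eval")
# Sending it would be one POST of this JSON body to whatever
# OpenAI-compatible endpoint serves the model.
print(payload["messages"][0]["content"][:60])
```

Because the whole file fits in the context window, there is no chunking or retrieval step in between: one file in, one review out.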
Trinity Large is a 400B parameter Mixture-of-Experts model trained in 33 days for $20M, a fraction of what frontier labs spend. It has 256 experts but activates only 4 per token, so only 13B active parameters do the work. That translates to 2-3x faster inference than anything in its weight class, which you can feel in the demo when reviews come back in seconds.
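The "only 4 of 256 experts per token" routing above can be sketched in a few lines. This is a toy illustration of top-k expert routing in general, not Trinity Large's actual layer code; the hidden size and weight shapes are made-up toy values, only the 256/top-4 numbers come from the description:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, D_MODEL = 256, 4, 64  # D_MODEL is a toy hidden size

# Router scores every expert; each "expert" here is just a small matrix.
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02
           for _ in range(NUM_EXPERTS)]

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                # one router score per expert
    top = np.argsort(logits)[-TOP_K:]    # indices of the k highest-scoring experts
    gate = np.exp(logits[top] - logits[top].max())
    gate /= gate.sum()                   # softmax over just the selected k
    # Only TOP_K of the 256 expert matrices are multiplied per token,
    # which is why the active parameter count is a small fraction of the total.
    out = sum(g * (x @ experts[i]) for g, i in zip(gate, top))
    return out, top

x = rng.standard_normal(D_MODEL)
out, chosen = moe_forward(x)
print(chosen)  # the 4 expert indices this token activated
```

Per-token compute scales with the 4 active experts rather than all 256, which is where the speed advantage of sparse MoE models over dense models of similar total size comes from.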
