Last week, AI startup Motif Technologies unveiled ‘Motif 12.7B,’ an internally developed large language model (LLM) with upgraded capabilities, just seven weeks after releasing a smaller model. The rapid succession of powerful LLM releases underscores the firm’s AI development expertise.
The Motif 12.7B model, released on Hugging Face, is a 12.7-billion-parameter LLM developed entirely in-house, from model architecture to data training. The company built on its existing models by folding in its accumulated GPU-utilization know-how and LLM development expertise, and it credits its proprietary ‘group-wise differential attention’ technique and ‘muon optimizer parallelization’ algorithm for significant gains in model performance and learning efficiency.
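The article does not publish implementation details, but ‘differential attention’ is a publicly documented idea (the Differential Transformer): attention is computed as the difference of two softmax attention maps so that ‘noise’ shared by both maps cancels out. The sketch below illustrates that public formulation in PyTorch; it is not Motif’s code, and the group-wise variant plus every name here (differential_attention, lam) are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def differential_attention(q1, k1, q2, k2, v, lam=0.5):
    """Illustrative differential attention (not Motif's implementation).

    q1, k1 and q2, k2 are two sets of query/key projections over the same
    sequence; v holds the values; lam is a mixing scalar (learnable in the
    original Differential Transformer paper).
    """
    scale = q1.shape[-1] ** -0.5
    a1 = F.softmax((q1 @ k1.transpose(-2, -1)) * scale, dim=-1)
    a2 = F.softmax((q2 @ k2.transpose(-2, -1)) * scale, dim=-1)
    # Subtracting the second map cancels attention "noise" common to both,
    # concentrating weight on the tokens that genuinely matter.
    return (a1 - lam * a2) @ v

# Toy usage: batch of 2, sequence of 8, head dimension 16.
q1, k1, q2, k2, v = (torch.randn(2, 8, 16) for _ in range(5))
out = differential_attention(q1, k1, q2, k2, v)
print(out.shape)  # torch.Size([2, 8, 16])
```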
These advances let Motif Technologies improve cost efficiency at both the development and operational stages. Skipping the reinforcement learning step during development sharply reduced costly training requirements, while at inference time the model automatically bypasses unnecessary computations, cutting GPU consumption, simplifying model management, and lowering response latency.
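The article does not say how the model decides which inference computations to skip. One common mechanism behind such claims is early exiting, where a cheap per-layer check halts computation once the prediction is confident enough. The sketch below is a generic illustration of that idea under that assumption, not Motif’s design; EarlyExitStack, exit_head, and threshold are hypothetical names.

```python
import torch
import torch.nn as nn

class EarlyExitStack(nn.Module):
    """Generic early-exit sketch: stop running layers once a lightweight
    per-layer classifier is confident, skipping the remaining compute."""

    def __init__(self, d_model=64, n_layers=6, vocab=100, threshold=0.9):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
            for _ in range(n_layers)
        )
        self.exit_head = nn.Linear(d_model, vocab)  # shared exit classifier
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = layer(x)
            probs = self.exit_head(x[:, -1]).softmax(-1)
            if probs.max() >= self.threshold:  # confident: skip the rest
                return probs, i + 1            # layers actually executed
        return probs, len(self.layers)

# Toy usage: reports how many of the 6 layers actually ran.
model = EarlyExitStack()
probs, layers_run = model(torch.randn(2, 10, 64))
print(layers_run)
```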
In benchmark tests of mathematical, scientific, and logical reasoning, the model outperformed Alibaba’s 72-billion-parameter Qwen2.5, and against Google’s similarly sized Gemma3 it scored higher on key reasoning metrics. Lim Jeong-hwan, CEO of Motif Technologies, said, “Motif 12.7B is a case of structural evolution in AI models, going beyond mere performance gains,” adding, “Group-wise differential attention and muon optimizer parallelization are innovations that reimagine the ‘brain’ and the energy efficiency of LLMs, respectively.”
Motif Technologies plans to release a 100-billion-parameter LLM in the future and to open-source its text-to-video (T2V) model by the end of this year.