EXAONE 4.5
LG AI Research's first open-weight vision-language model. It integrates a 1.2B-parameter vision encoder with the EXAONE 4.0 32B LLM via native multimodal pretraining and supports a 256K-token context window for long-document and enterprise-scale deployments. Training emphasized document-centric corpora aligned with LG's industrial applications.
It outperforms state-of-the-art models of similar scale on document understanding and Korean contextual reasoning, scoring 77.3 on average across STEM benchmarks, 81.4 on LiveCodeBench v6, and 62.2 on ChartQA Pro. The weights are openly released.
Model Details
Architecture: Dense
Parameters: 33B
Context window: 256,000 tokens
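As a rough illustration of what a 256,000-token window allows for long-document work, the sketch below estimates whether a plain-text document fits in context. The 4-characters-per-token ratio, the reserved output budget, and the helper names are assumptions for illustration; the real budget depends on the model's actual tokenizer.

```python
# Rough capacity check against a 256K-token context window.
# ASSUMPTION: ~4 characters per token is a common English-text heuristic,
# not a property of EXAONE's tokenizer.

CONTEXT_WINDOW = 256_000
CHARS_PER_TOKEN = 4  # heuristic, not the model's tokenizer

def estimated_tokens(text: str) -> int:
    """Cheap token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, reserve_for_output: int = 4_000) -> bool:
    """True if the prompt plus a reserved output budget fits in the window."""
    return estimated_tokens(text) + reserve_for_output <= CONTEXT_WINDOW

# A ~500-page report at ~2,000 characters per page:
report = "x" * (500 * 2_000)   # 1,000,000 chars, roughly 250,000 tokens
print(fits_in_context(report))  # fits, with ~2K tokens of headroom
```

In practice one would count tokens with the model's own tokenizer rather than a character heuristic, but the arithmetic above shows why a 256K window covers reports of several hundred pages in a single pass.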