Tl;dr: We’re heading into NLP conference season, with fresh batches of NAACL papers making the rounds on X. One that stood out was CodecLM, a Google paper that proposes creating synthetic data for instruction tuning. The idea is to use one of their big language models to create ‘synthetic instructions’ that help a smaller language model perform a given task more effectively. Then, completely unrelated, MAP-Neo is a new fully open-source pre-trained model, basically like Pythia or OLMo. The authors pinky promise it is super-duper open source, unlike Pythia, which is merely open source. It’s true that while Pythia is open source, actually using the model and data can be difficult due to poor documentation and outdated training frameworks. Hopefully MAP-Neo is easier to use.
FYI: My ‘popularity emoji’ is based on aggregate statistics of how many people have engaged with a paper on Twitter/X (as well as my own subjective personal interest).
Very popular (you really should know about this): 🔥
Popular (a good amount of people are discussing this): 😄
Less popular (but still worth making a mental note): 🙂
CodecLM: Aligning Language Models with Tailored Synthetic Data
Popularity: 😄
Instruction tuning has emerged as key to aligning large language models (LLMs) with specific task instructions, thereby mitigating the discrepancy between the next-token prediction objective and users' actual goals. To reduce the labor and time costs of collecting or annotating data by hand, researchers have started to explore the use of LLMs to generate instruction-aligned synthetic data. Recent works focus on generating diverse instructions and applying LLMs to increase instruction complexity, often neglecting downstream use cases. It remains unclear how to tailor high-quality data to elicit better instruction-following abilities across different target instruction distributions and LLMs. To this end, we introduce CodecLM, a general framework for adaptively generating high-quality synthetic data for LLM alignment with different downstream instruction distributions and LLMs. Drawing on Encode-Decode principles, we use LLMs as codecs to guide the data generation process. We first encode seed instructions into metadata, which are concise keywords generated on the fly to capture the target instruction distribution, and then decode the metadata to create tailored instructions. We also introduce Self-Rubrics and Contrastive Filtering during decoding to tailor data-efficient samples. Extensive experiments on four open-domain instruction-following benchmarks validate the effectiveness of CodecLM over the current state of the art.
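To make the encode-decode idea more concrete, here is a minimal sketch of the loop the abstract describes: encode a seed instruction into metadata, decode the metadata into a tailored instruction, make it harder via Self-Rubrics, and keep it only if Contrastive Filtering says the target model still struggles with it. The `call_strong_llm`/`call_target_llm` helpers, the prompt wording, and the scoring rule are my own assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch of a CodecLM-style encode-decode pipeline.
# The two call_* functions are placeholders for real model calls.

def call_strong_llm(prompt: str) -> str:
    # Replace with a real call to the strong LLM acting as the "codec".
    return f"<strong-LLM output for: {prompt[:40]}...>"

def call_target_llm(prompt: str) -> str:
    # Replace with a real call to the smaller target LLM being tuned.
    return f"<target-LLM output for: {prompt[:40]}...>"

def encode(seed_instruction: str) -> str:
    """Encode a seed instruction into concise metadata (use case + skills)."""
    return call_strong_llm(
        "Summarize the use case and required skills of this instruction "
        f"as a few keywords:\n{seed_instruction}"
    )

def decode(metadata: str) -> str:
    """Decode metadata back into a new, tailored instruction."""
    return call_strong_llm(
        f"Write a new instruction matching this use case and these skills:\n{metadata}"
    )

def self_rubrics_improve(instruction: str, metadata: str) -> str:
    """Use metadata-derived rubrics to make the instruction more challenging
    (a paraphrase of the Self-Rubrics step, not the paper's exact prompts)."""
    rubrics = call_strong_llm(f"List rubrics for judging difficulty given: {metadata}")
    return call_strong_llm(
        "Rewrite this instruction to be more challenging per these rubrics:\n"
        f"{rubrics}\nInstruction: {instruction}"
    )

def contrastive_filter(instruction: str) -> bool:
    """Keep an instruction only if the strong LLM clearly beats the target LLM
    on it (Contrastive Filtering, with an assumed 1-10 LLM judge)."""
    strong_answer = call_strong_llm(instruction)
    target_answer = call_target_llm(instruction)
    verdict = call_strong_llm(
        "Score each answer from 1-10 and reply as 'strong,target':\n"
        f"Q: {instruction}\nA1: {strong_answer}\nA2: {target_answer}"
    )
    try:
        strong_score, target_score = (float(x) for x in verdict.split(","))
    except ValueError:
        return False  # unparseable judge output -> drop the sample
    return strong_score - target_score >= 2  # assumed quality-gap threshold

def generate_synthetic_data(seed_instructions):
    data = []
    for seed in seed_instructions:
        metadata = encode(seed)
        instruction = self_rubrics_improve(decode(metadata), metadata)
        if contrastive_filter(instruction):
            data.append({"instruction": instruction,
                         "response": call_strong_llm(instruction)})
    return data
```

The point of the filtering step is data efficiency: only instructions where the strong model still has a clear edge over the target model make it into the fine-tuning set.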
MAP-Neo: Highly Capable and Transparent Bilingual Large Language Model Series
Popularity: 😄
Large Language Models (LLMs) have made great strides in recent years, achieving unprecedented performance across different tasks. However, due to commercial interests, the most competitive models like GPT, Gemini, and Claude have been gated behind proprietary interfaces without disclosing their training details. Recently, many institutions have open-sourced several strong LLMs, such as LLaMA-3, that are comparable to existing closed-source LLMs. However, only the model weights are provided, with most details (e.g., intermediate checkpoints, pre-training corpus, and training code) left undisclosed. To improve the transparency of LLMs, the research community has moved to open-source truly open LLMs (e.g., Pythia, Amber, OLMo), for which more details (e.g., pre-training corpus and training code) are provided. These models have greatly advanced the scientific study of large models, including their strengths, weaknesses, biases, and risks. However, we observe that existing truly open LLMs are still inferior to state-of-the-art LLMs of similar size on reasoning, knowledge, and coding tasks. To this end, we open-source MAP-Neo, a highly capable and transparent bilingual language model with 7B parameters trained from scratch on 4.5T high-quality tokens. MAP-Neo is the first fully open-sourced bilingual LLM with performance comparable to existing state-of-the-art LLMs. Moreover, we open-source all the details needed to reproduce MAP-Neo, including the cleaned pre-training corpus, data cleaning pipeline, checkpoints, and a well-optimized training/evaluation framework. Finally, we hope MAP-Neo will strengthen the open research community and inspire more innovation to facilitate further improvements of LLMs.
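If you just want to poke at the released checkpoints rather than reproduce the pre-training run, a plain transformers load should be enough. The Hub repo id and generation settings below are assumptions on my part, so double-check them against the official model card.

```python
# Minimal sketch of loading a MAP-Neo checkpoint with Hugging Face transformers.
# The repo id "m-a-p/neo_7b" is assumed -- verify it on the official release page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-a-p/neo_7b"  # assumed Hub repo id for the 7B base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Bilingual model, so a Chinese prompt is a reasonable smoke test.
inputs = tokenizer("请用一句话介绍大语言模型。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```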