GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation

Abstract

GPT-3 shows the remarkable in-context learning ability of large-scale language models (LMs) trained on hundreds of billions of tokens. Here we address some remaining issues less reported by the GPT-3 paper, such as a non-English LM, the performance of differently sized models, and the effect of recently introduced prompt optimization on in-context learning. To do so, we introduce HyperCLOVA, a Korean variant of 82B GPT-3 trained on a Korean-centric corpus of 560B tokens. Enhanced by our Korean-specific tokenization, HyperCLOVA with our training configuration shows state-of-the-art in-context zero-shot and few-shot learning performance on downstream tasks in Korean. We also show the performance benefits of prompt-based learning and demonstrate how it can be integrated into the prompt engineering pipeline. Then we discuss the possibility of materializing the No Code AI paradigm by providing AI prototyping capabilities to non-experts of ML through HyperCLOVA Studio, an interactive prompt engineering interface. Lastly, we demonstrate the potential of our methods with three successful in-house applications.
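The abstract refers to in-context zero-/few-shot learning, where labeled examples are placed directly in the prompt and the LM predicts by continuing the text. The sketch below is only an illustration of that setup; the task, labels, and helper function are hypothetical and not part of HyperCLOVA or HyperCLOVA Studio's API.

```python
# Hypothetical sketch of few-shot in-context prompting for a sentiment task.
# The LM is expected to complete the text after the final "Sentiment:" label.

def build_few_shot_prompt(examples, query, task_description):
    """Concatenate a task description, labeled examples, and an unlabeled query."""
    lines = [task_description]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")  # left open for the model to fill
    return "\n\n".join(lines)

examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(
    examples,
    query="The plot dragged, but the acting was superb.",
    task_description="Classify the sentiment of each movie review.",
)
print(prompt)
# The prompt string would then be sent to the LM (e.g., via an interface such as
# HyperCLOVA Studio); the model's continuation is taken as the few-shot prediction.
```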

Publication
Findings of EMNLP
Date