Workshop on the Scaling Behavior of Large Language Models

SCALE-LLM 2024

Malta, 22 March 2024, co-located with EACL 2024

Introduction

The purpose of this workshop is to provide a venue to share and discuss results of investigations into the scaling behavior of Large Language Models (LLMs). We are particularly interested in results displaying "interesting" scaling curves (e.g., inverse, U-shaped, or inverted-U scaling curves) for a variety of tasks. Such results, where the performance of LLMs decreases with increasing model size or follows a non-monotonic trend, deviate from the expected "the bigger, the better" positive scaling laws. They are of great scientific interest because they can reveal intrinsic limitations of current LLM architectures and training paradigms, and they point to novel research directions towards a better understanding of these models and of possible approaches to improve them. Recently, there has been increasing interest in these phenomena from the research community, culminating in the Inverse Scaling Prize ([McKenzie et al. 2023](pdf/inverse_scaling_prize_paper.pdf)), which solicited tasks to be evaluated according to a standardized protocol in order to perform a systematic study. The SCALE-LLM Workshop will expand these efforts. In contrast to the Inverse Scaling Prize, which focused on zero-shot tasks with a fixed format, we are also interested in, for example, few-shot and alternative prompting strategies (e.g., Chain-of-Thought), multi-step interactions (e.g., Tree-of-Thoughts, self-critique), hardening against prompt injection attacks (e.g., user input escaping, canary tokens), etc.

Main Workshop Topics

The workshop will provide focused discussions on multiple topics in the general field of scaling behavior of Large Language Models, including, but not limited to, the following:

1. Novel tasks that exhibit inverse, U-shaped, inverted-U or other types of scaling;
2. Scaling behavior of fine-tuned or purpose-built models, in particular in-distribution (w.r.t. the fine-tuning dataset) vs. out-of-distribution;
3. Scaling with adaptive prompting strategies, e.g. allowing intermediate "reasoning" steps, model self-critique or use of external tools;
4. Scaling w.r.t. additional dimensions, such as the number of in-context/fine-tuning examples, the number of "reasoning" steps, or the intrinsic task complexity;
5. Scaling on non-English language tasks, in particular low-resource languages, where models might exhibit tradeoffs as high-resource language training data overwhelms low-resource language capabilities;
6. Scaling w.r.t. qualitative characteristics: internal aspects (e.g. modularity, mechanistic interpretability), calibration, uncertainty, and the effectiveness of various techniques (pruning, defences against adversarial attacks, etc.).

Important dates

* **Workshop paper submission deadline: ~~December 18, 2023~~ December 25, 2023 (extended)**
* EACL rejected paper submission deadline (ARR pre-reviewed): January 17, 2024
* Notification of acceptance: January 27, 2024
* Camera-ready papers due: February 6, 2024
* Workshop date: March 22, 2024

Submission instructions

We solicit short and long paper submissions of no more than **4 and 8 pages**, respectively, plus unlimited pages for references and appendices. We welcome novel research, system descriptions, and position papers. Papers must contain "**Limitations**" and "**Ethics Statement**" sections, which will not count towards the page limit. Upon acceptance, **one additional page** will be provided to address the reviewers' comments. Paper submissions must use the official [ACL style templates](https://github.com/acl-org/acl-style-files) and must follow the [ACL formatting guidelines](https://acl-org.github.io/ACLPUB/formatting.html). All submissions must be anonymous. De-anonymized versions of the submitted papers **may** be released on pre-print servers such as arXiv; however, we kindly ask the authors not to discuss these papers on social media during the review period.

Please **send your submissions** to our [OpenReview interface](https://openreview.net/group?id=eacl.org/EACL/2024/Workshop/SCALE-LLM).

We can also consider papers submitted via **ACL Rolling Review (ARR)** to EACL and rejected. A paper may not be simultaneously under review through ARR and SCALE-LLM, and a paper that has received or will receive reviews through ARR may not be submitted for review to SCALE-LLM. Keep in mind that ARR has stricter anonymity requirements regarding pre-print servers and social media, so make sure you do not de-anonymize papers submitted through ARR by posting them on arXiv or social media. Please refer to the [ARR instructions for authors](https://aclrollingreview.org/authors) for more information.

Accepted papers will be published in the Proceedings of the First Workshop on the Scaling Behavior of Large Language Models; however, you may request that we not publish your paper (e.g., if it has already been submitted to or published in another venue).

Student scholarship

Thanks to our Platinum sponsor Google, we can offer financial support to a limited number of students from low-income countries or in otherwise financially disadvantaged situations who would like to participate in the SCALE-LLM workshop. We may be able to cover the EACL virtual conference registration fee. We will prioritize students who are authors of one of the accepted papers. If you are interested in receiving financial support, please [contact us](correspondence) before January 30, 2024, explaining your situation.

Invited Speakers

Ian McKenzie and Najoung Kim will each give a keynote talk.

### Ian McKenzie: Inverse Scaling: When Bigger isn't Better

Abstract: Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). I'll discuss the phenomenon of "inverse scaling": that LMs may show worse task performance with increased scale, e.g., due to flaws in the training objective and data. We collected empirical evidence of inverse scaling on 11 datasets obtained by running a public contest, the Inverse Scaling Prize. Through analysis of the datasets, along with other examples found in the literature, we identified four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are not always monotonic and that existing scaling laws are less reliable at predicting the behavior of larger-scale models than previously understood. Our results suggest that there are tasks for which increased model scale alone may not lead to improved performance, and that more careful thought needs to go into the data and objectives for training language models.

Ian McKenzie is the main organizer of the Inverse Scaling Prize and first author of the associated paper. He is currently a contracting Research Engineer on OpenAI's Dangerous Capability Evaluations project.

### Najoung Kim: Inverse scaling: mitigation strategies and open questions

Abstract: The Inverse Scaling Competition (McKenzie et al. 2023) solicited downstream tasks whose performance inversely correlates with model and training data size, leading to discoveries of various tasks that exhibit this pattern. I will discuss one known inference-time solution to this problem: using task demonstrations. Even one-shot in-context examples often suffice to change the scaling pattern of a task from inverse to U-shaped or flat (Wei et al. 2023). However, this solution does not generalize to inverse scaling problems in the broader scope that do not adhere to the specific task formulations adopted by McKenzie et al. (2023). As an example, I will discuss a finding where more pretraining data leads to less effective training of novel token representations in the context of compositional generalization (Kim et al. 2022), as well as other relevant observations in the recent literature pointing to a wider range of open questions.

Dr. Kim is an Assistant Professor at Boston University and a researcher at Google. She is also one of the authors of the Inverse Scaling Prize paper as well as other foundational works in this field.
Najoung Kim

Assistant Professor at Boston University

Ian McKenzie

Contracting Research Engineer at OpenAI

Schedule

Program overview (all times are GMT+1):

- 09:00 - 09:15 Opening Remarks
- 09:15 - 09:45 Invited Talk 1 - Ian McKenzie
- 09:45 - 10:30 Oral presentations
- 10:30 - 14:00 Break
- 14:00 - 14:30 Invited Talk 2 - Najoung Kim
- 14:30 - 15:15 Panel discussion
- 15:15 - 15:30 Best paper announcement and closing remarks
- 15:30 - 17:30 Poster session

Accepted papers

### Oral presentations

- [Scaling Behavior of Machine Translation with Large Language Models under Prompt Injection Attacks](pdf/4.pdf) - Zhifan Sun, Antonio Valerio Miceli-Barone
- [InstructEval: Towards Holistic Evaluation of Instruction-Tuned Large Language Models](pdf/9.pdf) - Yew Ken Chia, Pengfei Hong, Lidong Bing, Soujanya Poria
- Findings of EACL: [When do Generative Query and Document Expansions Fail? A Comprehensive Study Across Methods, Retrievers, and Datasets](https://aclanthology.org/2024.findings-eacl.134/) - Orion Weller, Kyle Lo, David Wadden, Dawn Lawrie, Benjamin Van Durme, Arman Cohan, Luca Soldaini

### Posters

- [A Proposal for Scaling the Scaling Laws](pdf/2.pdf) - Wout Schellaert, Ronan Hamon, Fernando Martínez-Plumed, Jose Hernandez-Orallo
- [Can Large Language Models Reason About Goal-Oriented Tasks?](pdf/5.pdf) - Filippos Bellos, Yayuan Li, Wuao Liu, Jason J Corso
- [Detecting Mode Collapse in Language Models via Narration](pdf/10.pdf) - Sil Hamilton

Organizing Committee

Antonio Valerio Miceli-Barone

Research Associate, University of Edinburgh

Fazl Barez

Research fellow, University of Oxford

Shay Cohen

Reader, University of Edinburgh

Elena Voita

Research Scientist, Meta

Ulrich Germann

Senior Computing Officer (Research), University of Edinburgh

Michal Lukasik

Researcher, Google Research

Sponsors

Google (Platinum sponsor)

Correspondence

Email: amiceli@ed.ac.uk