A Unified Framework for Neural Network Reprogrammability: From Model Reprogramming to Prompt Tuning

This article surveys recent advances in neural network reprogrammability. It presents a unified framework that categorises model reprogramming, prompt tuning, prompt instruction, and in‑context learning; traces the shift from parameter‑centric to reprogrammability‑centric adaptation; and offers efficiency analyses, a taxonomy, and practical case studies.

AI Frontier Lectures

Motivation

Researchers aim to reuse large pretrained models while changing as few model parameters as possible. Techniques such as model reprogramming, parameter‑efficient fine‑tuning (PEFT), prompt tuning, prompt instruction, and in‑context learning all share this goal.

Unified Framework: Neural Network Reprogrammability

A recent survey and AAAI‑2026 tutorial introduced the term Neural Network Reprogrammability to describe a family of methods that keep the pretrained backbone frozen and instead modify how tasks are presented to the model. The framework characterises each method along four axes:

Manipulation location: input space, embedding space, or hidden space.

Manipulation type: learnable (optimised) vs. fixed (hand‑crafted).

Manipulation operator: additive, concatenative, or parametric.

Output alignment: identity, structural, statistical, or linear mapping.

Mathematical Formulation

Let f be a frozen pretrained model with source‑domain input space X_s and output space Y_s. Reprogrammability introduces two configurable transformations:

Input manipulation g_{in}: X_t \rightarrow X_s – maps target‑domain inputs X_t to a format the model can process (e.g., additive perturbations, concatenated prompts, or parametric encodings).

Output alignment g_{out}: Y_s \rightarrow Y_t – maps the model’s predictions to the target‑domain label space Y_t (e.g., label mapping, linear projection, or structured parsing).

The reprogrammed system computes y_t = g_{out}(f(g_{in}(x_t))), requiring trainable parameters only for g_{in} and g_{out} while f remains unchanged.
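The composition y_t = g_out(f(g_in(x_t))) can be sketched as a small PyTorch wrapper. This is a minimal illustration, not code from the survey: the backbone, input manipulation, and output alignment are all placeholder modules, and only the latter two expose trainable parameters.

```python
import torch
import torch.nn as nn

class ReprogrammedModel(nn.Module):
    """Sketch of y_t = g_out(f(g_in(x_t))) with a frozen backbone f."""

    def __init__(self, backbone: nn.Module, g_in: nn.Module, g_out: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # f stays frozen
            p.requires_grad = False
        self.g_in = g_in                      # trainable input manipulation
        self.g_out = g_out                    # trainable output alignment

    def forward(self, x_t: torch.Tensor) -> torch.Tensor:
        x_s = self.g_in(x_t)       # map target input into the source input space
        y_s = self.backbone(x_s)   # frozen source-domain prediction
        return self.g_out(y_s)     # map prediction into the target label space
```

Any concrete method in this survey then reduces to a choice of `g_in` (additive perturbation, concatenated prompt, parametric encoder) and `g_out` (identity, label mapping, linear head).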

Efficiency Illustration

Adapting a Vision Transformer (ViT‑B/32) pretrained on ImageNet to the EuroSAT remote‑sensing classification task demonstrates the parameter efficiency of reprogrammability‑centric adaptation (RCA). Compared with traditional parameter‑centric adaptation (PCA) configurations, RCA reduces the number of trainable parameters by 2–3 orders of magnitude while achieving comparable accuracy. This makes RCA attractive for resource‑constrained environments and multi‑task deployment.
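A back-of-the-envelope calculation makes the "2–3 orders of magnitude" claim concrete. The figures below are illustrative assumptions, not numbers from the survey: roughly 86M parameters for ViT‑B, an input-space perturbation sized for a 3×224×224 image, and a linear alignment from 1,000 ImageNet classes to 10 EuroSAT classes.

```python
# Full fine-tuning (parameter-centric): every backbone weight is trainable.
vit_b_params = 86_000_000          # approximate ViT-B parameter count (assumption)

# Reprogrammability-centric: one additive perturbation over the input image
perturbation = 3 * 224 * 224       # 150,528 trainable values

# plus a linear output alignment from 1,000 source classes to 10 target classes
label_map = 1000 * 10              # 10,000 trainable values

rca_params = perturbation + label_map
print(f"full fine-tuning: {vit_b_params:,}")
print(f"reprogramming:    {rca_params:,}")
print(f"reduction:        {vit_b_params / rca_params:.0f}x")
```

Under these assumptions the reduction is roughly 500×, consistent with the 2–3 orders of magnitude reported for this style of adaptation.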

Representative Methods

Model Reprogramming (MR): learns an additive perturbation applied directly to raw inputs (e.g., images) and aligns outputs via a label‑mapping function. Requires access to the model’s input and output interfaces.

Prompt Tuning (PT): inserts learnable token embeddings into intermediate layers (embedding or hidden space). The backbone stays frozen; only the prompt embeddings are optimised.

Prompt Instruction (PI): supplies fixed demonstration examples and textual instructions as part of the input context. No parameters are learned; the model follows the provided context.

In‑Context Learning and Chain‑of‑Thought

In‑context learning (ICL) is a special case of RCA where the input manipulation is a fixed concatenation of demonstration examples with the query. No parameters are updated, and the model’s output is used directly (implicit output alignment). Chain‑of‑Thought (CoT) reasoning extends ICL by adding explicit intermediate reasoning steps to the input. The model generates a reasoning sequence that must be parsed with a structural output alignment to extract the final answer.
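Both pieces of this picture are easy to sketch: the fixed concatenation that plays the role of g_in, and the structural parse that plays the role of g_out for CoT outputs. The demonstration format and the "Answer:" convention below are illustrative assumptions, not a prescribed protocol.

```python
import re

def build_icl_prompt(demos, query):
    """g_in for ICL: fixed concatenation of demonstrations with the query.

    Nothing here is learned; the manipulation is entirely hand-crafted.
    """
    lines = [f"Q: {q}\nA: {a}" for q, a in demos]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

def parse_cot_answer(generation):
    """g_out for CoT: structural alignment that extracts the final answer
    from the model's free-form reasoning sequence."""
    match = re.search(r"Answer:\s*(.+)", generation)
    return match.group(1).strip() if match else generation.strip()

prompt = build_icl_prompt([("2+2?", "4"), ("3+5?", "8")], "7+6?")
answer = parse_cot_answer("7 plus 6 is 13. Answer: 13")
```

For plain ICL the parse step is the identity; CoT is precisely the case where a structural g_out becomes necessary.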

Taxonomy of Reprogrammability Methods

The four axes described above provide a systematic way to organise existing works. For example, MR uses an additive operator on the input space with learnable parameters and a label‑mapping output alignment; PT uses a concatenative operator on the embedding/hidden space with learnable parameters and typically an identity or linear output alignment; PI uses a concatenative operator on the input space, is fixed (no learnable parameters), and relies on implicit output alignment.
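The placements described above can be encoded as a small lookup table, with each method a point in the four-axis space. The entries follow the examples in the text; the dictionary itself is just an illustrative encoding.

```python
# Each method characterised along the four axes of the framework.
TAXONOMY = {
    "model_reprogramming": {
        "location": "input",
        "type": "learnable",
        "operator": "additive",
        "alignment": "label-mapping",
    },
    "prompt_tuning": {
        "location": "embedding/hidden",
        "type": "learnable",
        "operator": "concatenative",
        "alignment": "identity-or-linear",
    },
    "prompt_instruction": {
        "location": "input",
        "type": "fixed",
        "operator": "concatenative",
        "alignment": "implicit",
    },
}

def methods_with(axis, value):
    """Query the taxonomy, e.g. all methods whose manipulation is learnable."""
    return [m for m, axes in TAXONOMY.items() if axes[axis] == value]
```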

Concrete Case Studies

Image Classification via Model Reprogramming:

Input manipulation: resize the target image and add a learnable perturbation \lambda to match the pretrained classifier’s expected format.

Frozen model: a pretrained ResNet or ViT processes the perturbed image.

Output alignment: a label‑mapping function converts the source‑domain class predictions to the target task’s label set.

Training: only \lambda is optimised.
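The four steps above can be condensed into a short training sketch. Everything here is a toy stand-in chosen for illustration: a linear layer plays the frozen classifier, the label mapping is a random many-to-one assignment, and only the perturbation \lambda receives gradients.

```python
import torch
import torch.nn.functional as F

backbone = torch.nn.Linear(3 * 32 * 32, 100)  # stand-in for a frozen classifier
for p in backbone.parameters():
    p.requires_grad = False

# λ: the single trainable tensor, added to every input image
lam = torch.zeros(3 * 32 * 32, requires_grad=True)

# Fixed many-to-one label mapping: 100 source classes -> 10 target classes
label_map = torch.randint(0, 10, (100,))
map_matrix = F.one_hot(label_map, 10).float()  # (100, 10)

opt = torch.optim.Adam([lam], lr=1e-2)
x_t = torch.randn(8, 3 * 32 * 32)              # batch of target-domain inputs
y_t = torch.randint(0, 10, (8,))               # target labels

for _ in range(5):
    logits_s = backbone(x_t + lam)             # g_in: additive perturbation
    logits_t = logits_s @ map_matrix           # g_out: label mapping
    loss = F.cross_entropy(logits_t, y_t)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Swapping the linear stand-in for a pretrained ResNet or ViT leaves the structure unchanged; the optimiser still sees only \lambda.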

Text Generation via Prompt Tuning:

Input manipulation: prepend learnable prompt tokens \lambda to the target text.

Frozen model: a pretrained language model (e.g., GPT) generates text conditioned on the enhanced prompt.

Output alignment: identity (the model already outputs in the desired language space).

Training: only the prompt tokens \lambda are updated.
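This case study can likewise be sketched in a few lines. A tiny transformer encoder stands in for the frozen pretrained language model (an illustrative assumption); the learnable prompt embeddings \lambda are prepended to the embedded input, and they are the only parameters that would be optimised.

```python
import torch
import torch.nn as nn

d_model, n_prompt = 32, 4

# Stand-in for a frozen pretrained LM (illustrative, not GPT)
frozen_lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=1,
)
for p in frozen_lm.parameters():
    p.requires_grad = False

# λ: the learnable prompt token embeddings
prompt = nn.Parameter(torch.randn(n_prompt, d_model))

def forward(token_embeddings):
    """g_in: prepend prompt embeddings; g_out: identity on the LM output."""
    batch = token_embeddings.size(0)
    prefix = prompt.unsqueeze(0).expand(batch, -1, -1)
    return frozen_lm(torch.cat([prefix, token_embeddings], dim=1))

out = forward(torch.randn(2, 6, d_model))  # (batch=2, 4 prompt + 6 text tokens)
```

The trainable footprint is n_prompt × d_model values, independent of the backbone's size, which is the source of prompt tuning's parameter efficiency.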

Resource Repository

An “Awesome Neural Network Reprogrammability” collection on GitHub aggregates papers, code, and datasets for this field. The repository can be accessed at https://zyecs.github.io/awesome-reprogrammability/ and includes a tutorial sub‑directory at https://zyecs.github.io/awesome-reprogrammability/tutorial-AAAI26/.

[Figure: Illustration of model reprogramming]