Tag

PEFT


vivo Internet Technology
Feb 12, 2025 · Artificial Intelligence

Bidirectional Optimization of NLLB-200 and ChatGPT for Low-Resource Language Translation

The paper proposes a bidirectional optimization framework: the NLLB-200 translation model is fine-tuned with LoRA on low-resource-language data generated by ChatGPT, while NLLB-200 in turn translates low-resource-language prompts into high-resource languages before they are fed to LLMs. The approach improves multilingual translation quality, though the noisy synthetic data requires careful validation.

Fine-tuning · LLM · LoRA
28 min read
DataFunSummit
Jan 11, 2025 · Artificial Intelligence

Generative AI Applications, MLOps, and LLMOps: A Comprehensive Overview

This article presents a detailed overview of generative AI lifecycle management, covering practical use cases such as email summarization; the roles of model providers, fine-tuners, and consumers; MLOps/LLMOps processes; retrieval-augmented generation; parameter-efficient fine-tuning (PEFT) methods; and Amazon Bedrock services for model deployment and monitoring.

Amazon Bedrock · LLMOps · PEFT
14 min read
58 Tech
Jun 3, 2024 · Artificial Intelligence

Parameter-Efficient Fine-Tuning (PEFT) Methods for Large Language Models: LoRA, QLoRA, AdaLoRA, SoRA, and Training Acceleration with Unsloth

This article systematically analyzes popular parameter‑efficient fine‑tuning (PEFT) techniques for large language models—including Adapter Tuning, Prefix Tuning, LoRA, QLoRA, AdaLoRA, and SoRA—detailing their principles, implementation code, experimental results on NLU tasks, and practical acceleration using the Unsloth library.
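The core idea shared by the LoRA-family methods in this article can be illustrated with a minimal NumPy sketch (an illustration under assumed dimensions, not the article's code): a frozen pretrained weight W is adapted by a trainable low-rank update (alpha/r) · B·A, so only A and B are updated during fine-tuning.

```python
import numpy as np

# Minimal LoRA sketch: hypothetical dimensions, not from the article.
d, r, alpha = 512, 8, 16
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable, rank r
B = np.zeros((d, r))                     # trainable, zero-init: no change at start

def lora_forward(x):
    # Effective weight is W plus the scaled low-rank update.
    return x @ (W + (alpha / r) * (B @ A)).T

full_params = W.size            # 512 * 512 = 262144
lora_params = A.size + B.size   # 2 * 8 * 512 = 8192
print(full_params, lora_params)  # 262144 8192
```

Here LoRA trains about 3% of the full parameter count; QLoRA, AdaLoRA, and SoRA vary the quantization of W and the allocation of the rank budget on top of this same decomposition.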

AdaLoRA · LoRA · PEFT
39 min read