Quickly Add NLP to Node Apps with Hugging Face Transformers.js
This tutorial shows how to integrate Hugging Face's open‑source Transformers.js library into Node.js projects, covering setup, the Pipeline API, and practical code examples for sentiment analysis, zero‑shot classification, text generation, translation, and question answering, while also discussing when to prefer Python alternatives.
Hugging Face is an open‑source platform where engineers can collaborate on large language models (LLMs), datasets and applications. Unlike proprietary models that require commercial licences, Hugging Face focuses on open‑source models that are smaller and can run locally.
Hugging Face also offers hosted inference services that you can pay for per request. Applications interact with an LLM through an inference endpoint just like any other API.
This article concentrates on the open‑source libraries maintained by Hugging Face, especially the Transformers library, and shows how little code is needed to add natural‑language‑processing (NLP) capabilities to a Node application.
Because the author prefers software development to data science, the examples use the JavaScript library Transformers.js; equivalent Python code is also possible.
Prerequisites
The default LLM model is used, so no Hugging Face account or API token is required. The library can run directly in the browser, but the examples run on a Node server.
Node (v22.14.0 or any recent LTS) must be installed.
Initialize a new project and install the library:
mkdir my-project
cd my-project
npm init
npm i @huggingface/transformers

Make sure package.json contains "type": "module" to enable ECMAScript modules.
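After these steps, the resulting package.json should look roughly like the following (the name and version numbers are illustrative; the important line is "type": "module"):

```json
{
  "name": "my-project",
  "version": "1.0.0",
  "type": "module",
  "dependencies": {
    "@huggingface/transformers": "^3.0.0"
  }
}
```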
Pipeline API
The simplest way to interact with a local LLM is the Pipeline API.
Import the pipeline function:
import { pipeline } from '@huggingface/transformers';

Instantiate a pipeline for a specific task (e.g., sentiment analysis):

const sentimentAnalysis = await pipeline('sentiment-analysis');

Call the pipeline with input text to obtain the result:

const out = await sentimentAnalysis('I love this product!');

The Pipeline API supports many tasks; the article demonstrates the following:
Sentiment analysis: classifies input as positive, negative or neutral.
Zero‑shot classification: classifies input into a user‑provided label list without examples.
Text generation: generates text from a prompt.
Translation: translates input text into a target language.
Question answering: extracts an answer from supplied context.
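The sentiment-analysis pipeline resolves to an array of { label, score } objects. A small helper can turn that raw output into a yes/no decision; this is a sketch, and the exact label strings ('POSITIVE' here) match the default DistilBERT SST-2 model, so other models may differ:

```javascript
// The pipeline output is an array of classifications, with the
// highest-scoring one first. This helper accepts a result only when
// the top label is POSITIVE and its score clears a threshold.
function isPositive(results, threshold = 0.8) {
  const [top] = results; // highest-scoring classification
  return top.label === 'POSITIVE' && top.score >= threshold;
}

// A result shaped like what the pipeline returns:
const sample = [{ label: 'POSITIVE', score: 0.9987 }];
console.log(isPositive(sample)); // true
```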
Example code
All examples download the model to the local file system. The first run may take a few minutes (model size 2‑3 GB); subsequent runs are much faster.
Zero‑shot classification
import { pipeline } from '@huggingface/transformers';

async function runClassifier() {
  const classifier = await pipeline('zero-shot-classification', 'Xenova/nli-deberta-v3-xsmall');
  const classes = ['technical support', 'complaint', 'inquiry', 'billing'];
  const result = await classifier('My television is not working, and I need to organise a repair.', classes);
  console.log(JSON.stringify(result));
  const result2 = await classifier('My credit card recently expired and my subscription is due to be paid soon.', classes);
  console.log(JSON.stringify(result2));
}

await runClassifier();

Run with node run-classifier.js. The output lists the labels with their scores in descending order.
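The zero-shot classifier resolves to an object of the form { sequence, labels, scores }, with labels and scores sorted by score in descending order. A sketch of routing a support message based on that output (the threshold and fallback string are arbitrary illustrative choices, not part of the library):

```javascript
// Route a message to the top label, but only when the classifier is
// confident enough; otherwise fall back to human triage.
function routeMessage(result, minScore = 0.5) {
  const [topLabel] = result.labels; // sorted descending by score
  const [topScore] = result.scores;
  return topScore >= minScore ? topLabel : 'needs human triage';
}

// An object shaped like the classifier output above:
const example = {
  sequence: 'My television is not working, and I need to organise a repair.',
  labels: ['technical support', 'complaint', 'inquiry', 'billing'],
  scores: [0.81, 0.11, 0.05, 0.03],
};
console.log(routeMessage(example)); // 'technical support'
```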
Text generation
import { pipeline } from '@huggingface/transformers';

async function runGenerator() {
  const generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
  const result = await generator('Write a haiku about large learning models.', {
    max_new_tokens: 200,
    temperature: 0,
    repetition_penalty: 2.0,
    no_repeat_ngram_size: 3,
  });
  console.log(result[0].generated_text);
}

await runGenerator();

Running node run-generator.js prints a one‑line haiku.
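Of the generation options above, no_repeat_ngram_size: 3 means that no sequence of three tokens may appear twice in the output. The following is a conceptual sketch of the constraint being enforced, not the library's actual decoding implementation (which works on token IDs and blocks candidates during beam/greedy search):

```javascript
// Check whether any n-token window occurs more than once in a token
// sequence -- the condition that no_repeat_ngram_size rules out.
function hasRepeatedNgram(tokens, n = 3) {
  const seen = new Set();
  for (let i = 0; i + n <= tokens.length; i++) {
    const ngram = tokens.slice(i, i + n).join(' ');
    if (seen.has(ngram)) return true;
    seen.add(ngram);
  }
  return false;
}

console.log(hasRepeatedNgram(['a', 'b', 'c', 'a', 'b', 'c'])); // true ('a b c' repeats)
console.log(hasRepeatedNgram(['a', 'b', 'c', 'd'])); // false
```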
Translation
import { pipeline } from '@huggingface/transformers';

async function runTranslator() {
  const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
  const targetLanguage = 'deu_Latn';
  const english = 'These Pretzels are making me thirsty.';
  const translated = await translator(english, { src_lang: 'eng_Latn', tgt_lang: targetLanguage });
  const backToEnglish = await translator(translated[0].translation_text, { src_lang: targetLanguage, tgt_lang: 'eng_Latn' });
  console.log(`Original: ${english}`);
  console.log(`Translated: ${translated[0].translation_text}`);
  console.log(`Translated back to English: ${backToEnglish[0].translation_text}`);
}

await runTranslator();

The example shows a correct German translation but a loss of the word “Pretzel” when translating back to English.
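The NLLB model uses FLORES-200 language codes (such as 'eng_Latn' and 'deu_Latn' above) rather than plain ISO codes. A small lookup table keeps call sites readable; the codes listed here are genuine FLORES-200 identifiers, but the helper itself is just an illustrative convenience:

```javascript
// Map human-readable language names to FLORES-200 codes as used by
// the NLLB translation models.
const FLORES_CODES = {
  english: 'eng_Latn',
  german: 'deu_Latn',
  french: 'fra_Latn',
  spanish: 'spa_Latn',
};

function floresCode(language) {
  const code = FLORES_CODES[language.toLowerCase()];
  if (!code) throw new Error(`No FLORES-200 code mapped for "${language}"`);
  return code;
}

console.log(floresCode('German')); // 'deu_Latn'
```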
Question answering
import { pipeline } from '@huggingface/transformers';
import fs from 'fs';

async function runQuestions() {
  const knowledge = fs.readFileSync('knowledge.txt', 'utf8');
  const qa = await pipeline('question-answering', 'Xenova/distilbert-base-cased-distilled-squad');
  const question = 'What is the name of the band founded in Sydney in 1973?';
  const result = await qa(question, knowledge);
  console.log(`Question: ${question}`);
  console.log(`Answer: ${result.answer} (score: ${result.score})`);
}

await runQuestions();

The script correctly returns “ACDC”.
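Extractive question answering resolves to an object with at least answer and score fields. Because the model always extracts some span from the context, even for questions it cannot answer, it is worth rejecting low-confidence results; the cutoff below is an arbitrary illustration, not a library default:

```javascript
// Return the extracted answer only when the model's confidence
// clears a minimum score; otherwise signal "no answer" with null.
function answerOrNull(result, minScore = 0.3) {
  return result.score >= minScore ? result.answer : null;
}

console.log(answerOrNull({ answer: 'ACDC', score: 0.97 })); // 'ACDC'
console.log(answerOrNull({ answer: 'Sydney', score: 0.05 })); // null
```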
Using a generator for QA
import { pipeline } from '@huggingface/transformers';
import fs from 'fs';

async function runQuestionsWithGenerator() {
  const knowledge = fs.readFileSync('knowledge.txt', 'utf8');
  const generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
  const question = 'Who is the lead guitarist of AC/DC?';
  const result = await generator(`Context: ${knowledge} Question: ${question}`, {
    max_new_tokens: 200,
    temperature: 0,
    repetition_penalty: 2.0,
    no_repeat_ngram_size: 3,
  });
  console.log(result[0].generated_text);
}

await runQuestionsWithGenerator();

This returns “Angus is the lead guitarist of AC/DC”.
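The prompt format above (context document and question concatenated into one string) can be factored into a hypothetical helper. The truncation guard is an assumption of this sketch: it is a crude character-based approximation of the model's input window, not a token count, and the 4000-character limit is arbitrary:

```javascript
// Build a single prompt string for the text2text-generation pipeline
// from a knowledge document and a question, truncating oversized
// context by character count.
function buildQaPrompt(knowledge, question, maxContextChars = 4000) {
  const context = knowledge.slice(0, maxContextChars);
  return `Context: ${context} Question: ${question}`;
}

const prompt = buildQaPrompt('AC/DC was founded in Sydney in 1973.', 'Who founded AC/DC?');
console.log(prompt); // 'Context: AC/DC was founded in Sydney in 1973. Question: Who founded AC/DC?'
```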
Summary
The examples demonstrate that integrating AI/NLP functionality from Hugging Face into a Node program requires only a few lines of JavaScript. A JavaScript library is advantageous when developers are comfortable with Node, TypeScript or JavaScript but not with Python, allowing the entire application stack to remain in Node.
Before deploying, consider model size, inference cost, licensing, and whether a Python library might be preferable for advanced features such as retrieval‑augmented generation (RAG) or tool‑calling, which are currently better supported in Python.
Why you might choose Python
The Python Transformers library offers richer documentation and built‑in support for RAG, tool‑calling, and fine‑tuning APIs that are not yet available in the JavaScript ecosystem. For large knowledge bases or complex pipelines, Python may reduce implementation effort.
Code Mala Tang
Read source code together, write articles together, and enjoy spicy hot pot together.