Tag: Positional Encoding

Tencent Technical Engineering
Apr 16, 2025 · Artificial Intelligence

Understanding Transformer Architecture for Chinese‑English Translation: A Practical Guide

This practical guide walks through the full Transformer architecture for Chinese‑to‑English translation, detailing encoder‑decoder structure, tokenization and embeddings, batch handling with padding and masks, positional encodings, parallel teacher‑forcing, self‑ and multi‑head attention, and the complete forward and back‑propagation training steps.

Embedding · Positional Encoding · PyTorch
0 likes · 26 min read
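As a minimal illustration of the batch padding and masking the guide covers, here is a hedged pure-Python sketch; the function name and token ids are hypothetical, not taken from the article:

```python
def make_padding_mask(token_ids, pad_id=0):
    """Boolean mask: True for real tokens, False for padding.

    In a Transformer, positions where the mask is False have their
    attention scores set to -inf before the softmax, so padded slots
    receive zero attention weight.
    """
    return [[tok != pad_id for tok in seq] for seq in token_ids]

# A batch of two training sequences of unequal length,
# right-padded with pad_id = 0 (token ids are made up).
batch = [[5, 9, 3, 0, 0],
         [7, 2, 4, 8, 1]]
mask = make_padding_mask(batch)
# mask[0] -> [True, True, True, False, False]
```

The same mask shape is what the attention layers consume; in PyTorch one would typically build it as a boolean tensor instead of nested lists.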
AntTech
Mar 4, 2025 · Artificial Intelligence

GraphCLIP and 2D‑TPE: Enhancing Transferability of Graph Models and Table Understanding for Large Language Models

This article introduces GraphCLIP, a self‑supervised graph‑summary pre‑training framework that improves the zero‑ and few‑shot transferability of graph foundation models on text‑attributed graphs, and 2D‑TPE, a two‑dimensional positional encoding method that preserves table structure and markedly improves large language model performance on table‑understanding tasks. It also announces a live paper session with the authors at WWW 2025.

Graph Neural Networks · Large Language Models · Positional Encoding
0 likes · 6 min read
JD Tech
Jun 7, 2024 · Artificial Intelligence

Understanding Attention Mechanisms, Self‑Attention, and Multi‑Head Attention in Transformers

This article explains the fundamentals of attention mechanisms, from their biological inspiration and the evolution of early visual attention to modern self‑attention in Transformers. It details scaled dot‑product attention, positional encoding, and multi‑head attention, and illustrates how these concepts enable efficient parallel processing of sequence data.

AI · Attention · Positional Encoding
0 likes · 12 min read
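The scaled dot‑product calculation this article details can be sketched in a few lines of pure Python; this is a hedged, dependency‑free illustration (the helper names are made up, and a real implementation would use batched tensor ops):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V are lists of row vectors (lists of floats).
    """
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query with every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Each output row is an attention-weighted sum of the value rows.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Tiny example: 2 queries, 3 key/value pairs, d_k = 2.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0], [2.0], [3.0]]
attn = scaled_dot_product_attention(Q, K, V)
```

Multi‑head attention applies this same computation in parallel over several learned projections of Q, K, and V, then concatenates the per‑head outputs.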
DaTaobao Tech
Sep 11, 2023 · Artificial Intelligence

Large Language Model Upgrade Paths and Architecture Selection

This article analyzes the upgrade paths of major LLMs (ChatGLM, LLaMA, Baichuan), detailing performance, context length, and architectural changes. It then examines essential capabilities, data cleaning, and tokenizer and attention design, and offers practical guidance for balanced scaling and efficient model construction.

Baichuan · ChatGLM · LLM architecture
0 likes · 32 min read
Nightwalker Tech
Jul 18, 2023 · Artificial Intelligence

Implementing the Input Processing Layer of a Transformer Model: Tokenization, Embedding, and Positional Encoding

This article explains how to build the input processing stage of a Transformer—including tokenization with Hugging Face tokenizers, token‑to‑embedding conversion using BERT models, custom BPE tokenizers, and positional encoding—providing complete Python code examples and test results.

BPE · Positional Encoding · PyTorch
0 likes · 14 min read
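The sinusoidal positional encoding this article implements follows the standard formulation PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). A minimal pure‑Python sketch, not the article's own code:

```python
import math

def sinusoidal_positional_encoding(max_len, d_model):
    """Return a max_len x d_model table of sinusoidal position encodings.

    Even columns hold sin(pos / 10000^(2i/d_model)), odd columns the
    matching cos; each dimension pair oscillates at its own wavelength,
    which lets the model attend to relative positions.
    """
    pe = []
    for pos in range(max_len):
        row = []
        for i in range(d_model):
            # Pairs (2i, 2i+1) share one wavelength, hence i // 2.
            angle = pos / (10000 ** ((2 * (i // 2)) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

pe = sinusoidal_positional_encoding(max_len=50, d_model=8)
# Row 0 alternates 0, 1, 0, 1, ... since sin(0) = 0 and cos(0) = 1.
```

In practice this table is precomputed once and added to the token embeddings before the first encoder layer; a PyTorch version would register it as a non‑trainable buffer.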
Sohu Tech Products
Jan 9, 2019 · Artificial Intelligence

Understanding the Transformer Model: Attention, Self‑Attention, and Multi‑Head Mechanisms

This article provides a comprehensive, step‑by‑step explanation of the Transformer architecture, covering its encoder‑decoder structure, self‑attention, multi‑head attention, positional encoding, residual connections, and training processes, illustrated with diagrams and code snippets to aid readers new to neural machine translation.

Multi-Head Attention · Positional Encoding · Self-Attention
0 likes · 16 min read