
Intelligent Grading: Technical Exploration and Practice in AI‑Powered Education

This article presents a comprehensive overview of AI‑driven intelligent grading technology: the background and typical challenges of educational settings, multimodal NLP solutions for essay analysis, spelling and grammar correction, adaptive learning, and related research. It illustrates how deep learning and multimodal models improve automated assessment across K‑12 scenarios.

DataFunSummit

The talk, presented by senior researcher Li Chao from Tencent, introduces the background of intelligent education, tracing the evolution from traditional to online and AI‑enhanced learning, and highlighting the need for automated grading in various classroom scenarios such as lectures, homework, review, and examinations.

Typical problems in educational settings are identified, including the need for a knowledge‑graph‑based curriculum hierarchy, mapping questions to knowledge points, extracting difficulty levels, and aligning video resources with specific concepts.
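As a toy illustration of mapping questions to knowledge points, a keyword-overlap lookup can be sketched; the knowledge points and keyword sets below are hypothetical examples, not the talk's actual knowledge graph:

```python
# Minimal sketch: map a question to knowledge points by keyword overlap.
# The knowledge points and keyword sets here are hypothetical examples;
# the real system queries an expert-built knowledge graph.
KNOWLEDGE_POINTS = {
    "quadratic equations": {"quadratic", "roots", "discriminant"},
    "linear functions": {"slope", "intercept", "linear"},
}

def map_to_knowledge_points(question: str) -> list[str]:
    """Return knowledge points whose keywords appear in the question."""
    tokens = set(question.lower().split())
    return [kp for kp, kws in KNOWLEDGE_POINTS.items() if tokens & kws]
```

A production system would replace the keyword sets with learned question representations, but the mapping interface is the same: question in, ranked knowledge points out.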

A four‑layer technical architecture is described: (1) resource layer with expert‑built knowledge graphs and large question banks; (2) algorithm layer employing NLP tasks (semantic mining, text classification, question representation) and multimodal processing of images and audio; (3) engine layer providing automatic grading and adaptive‑learning engines; (4) application layer deploying these engines for test generation, homework feedback, diagnostics, tutoring, and exam scoring.

The core grading solutions are divided into subjective (essay) and objective (multiple‑choice, fill‑in‑the‑blank) tasks. For essays, a four‑level analysis is performed: word usage, sentence expression (rhetoric, style, description), discourse structure (paragraph organization, argument strength), and overall feedback (scoring consistency, comparative analysis across classes).
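The four analysis levels can feed a single essay score; the weighted-sum sketch below is illustrative only, with hypothetical weights not stated in the talk:

```python
# Hypothetical sketch: combine the four essay-analysis levels into one
# score. The weights are illustrative examples, not from the talk.
LEVEL_WEIGHTS = {"word": 0.2, "sentence": 0.3, "discourse": 0.3, "overall": 0.2}

def aggregate_essay_score(level_scores: dict[str, float]) -> float:
    """Weighted sum of per-level scores, each on a 0-100 scale."""
    return sum(LEVEL_WEIGHTS[k] * level_scores[k] for k in LEVEL_WEIGHTS)
```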

Spelling correction is tackled with a multimodal model that jointly encodes character semantics, phonetic pinyin (via a GRU), and visual glyph features (via a ResNet), fused through a Transformer, achieving state‑of‑the‑art results on the SIGHAN benchmarks.
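The trained multimodal model is beyond a short snippet, but the intuition it exploits — phonetically or visually similar characters are likely corrections — can be sketched with confusion-set lookup; the tiny confusion sets below are hypothetical examples:

```python
# Toy sketch of confusion-set candidate generation for Chinese spelling
# correction. A real system scores candidates with the joint
# semantic/phonetic/glyph model; these sets are hypothetical examples.
PHONETIC_CONFUSION = {"在": {"再"}, "再": {"在"}}   # same pinyin "zai"
SHAPE_CONFUSION = {"人": {"入"}, "入": {"人"}}     # visually similar glyphs

def candidates(char: str) -> set[str]:
    """All plausible replacements drawn from both confusion channels."""
    return PHONETIC_CONFUSION.get(char, set()) | SHAPE_CONFUSION.get(char, set())
```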

Grammar error detection follows a sequence‑labeling approach similar to Grammarly, enhanced with dependency‑masking and model fusion to improve long‑range dependency handling.
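In a sequence-labeling setup, the detector emits a tag per token and error spans are decoded from the tag sequence; a minimal decoder under the standard BIO convention might look like:

```python
# Sketch of decoding BIO tags from a sequence-labeling grammar detector
# into error spans; the B-ERR/I-ERR/O scheme is the standard BIO
# convention, used here for illustration.
def decode_error_spans(tags: list[str]) -> list[tuple[int, int]]:
    """Return (start, end) token index pairs (end exclusive) for error runs."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B-ERR":
            if start is not None:          # close any open span first
                spans.append((start, i))
            start = i
        elif tag != "I-ERR" and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:                   # span runs to end of sentence
        spans.append((start, len(tags)))
    return spans
```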

Content understanding extends to rhetorical device detection, narrative element extraction, argument‑evidence identification, and style classification, using text‑classification and ranking methods.
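As a toy baseline for rhetorical device detection, a marker-word rule for similes can be sketched; the production approach is a text classifier, and the marker list below is a hypothetical example:

```python
# Toy rule-based sketch of simile detection for rhetorical device
# analysis; real systems use text classifiers. Marker words are
# hypothetical examples of common Chinese simile connectives.
SIMILE_MARKERS = ("像", "好像", "仿佛", "如同")

def has_simile(sentence: str) -> bool:
    """Flag a sentence containing a common simile marker word."""
    return any(m in sentence for m in SIMILE_MARKERS)
```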

Other writing tasks such as imitation, continuation, and picture‑based composition are discussed, emphasizing multimodal understanding of visual scenes and their alignment with generated text.

Objective question grading addresses textual answer consistency, numeric answer verification, and equation‑matching for application problems, highlighting challenges like multi‑solution answers and OCR errors.
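Numeric answer verification can be sketched by normalizing both answers to exact fractions so equivalent forms match; this is a simplified illustration that ignores units, multi-solution answers, and OCR noise:

```python
from fractions import Fraction

# Sketch of numeric answer verification: normalize both answers to exact
# fractions so that "0.5", "1/2", and "2/4" all compare equal. A real
# grader also handles units, multi-solution answers, and OCR errors.
def same_number(student: str, reference: str) -> bool:
    """Compare two numeric strings exactly after normalization."""
    def parse(s: str) -> Fraction:
        s = s.strip()
        if "/" in s:
            num, den = s.split("/")
            return Fraction(int(num), int(den))
        return Fraction(s)  # handles decimals like "0.5" exactly
    try:
        return parse(student) == parse(reference)
    except (ValueError, ZeroDivisionError):
        return False
```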

The presentation concludes with a list of related papers covering dependency‑masked BERT, multimodal Chinese spell checking, essay scoring, argument‑structure detection, multimodal OCR correction, and a multimodal survey, followed by a Q&A session on essay analysis, data generation for spelling correction, and Chinese grammar error correction.

Tags: AI, NLP, multimodal learning, education technology, essay scoring, intelligent grading, spell checking
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
