Mastering Seata AT Mode: A Step‑by‑Step Guide for Distributed Transactions
This article shows how to set up and use Seata's AT mode with Spring Boot, Spring Cloud, and Spring Cloud Alibaba. It walks through the configuration, code, database scripts, and testing procedure needed for reliable distributed transaction management in a microservice architecture.
Environment: Spring Boot 2.7.16, Spring Cloud 2021.0.9, Spring Cloud Alibaba 2021.0.4.0, Seata 1.6.1, JDK 17.
1. Introduction
Seata is an open‑source distributed transaction solution that offers AT, TCC, SAGA, and XA transaction modes, aiming to provide high performance and ease of use.
Seata AT Mode
AT mode is a non‑intrusive distributed transaction solution. Seata adds a proxy layer (DataSourceProxy) that intercepts database operations, automatically writes undo_log entries, checks global locks, and handles commit/rollback logic.
Overall Mechanism
AT mode is an evolution of the classic two-phase commit protocol:
Phase 1: Business data and the undo log are committed in the same local transaction, after which local locks and the connection are released.
Phase 2: Commit is asynchronous and completes almost immediately. Rollback uses the undo log written in phase 1 to compensate.
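The two phases above can be sketched as a toy, in-memory illustration. This is not Seata's actual implementation (Seata records before/after images of the affected rows as JSON in undo_log); it only shows the shape of the idea: phase 1 saves a before-image together with the business change, and phase 2 either discards it (commit) or restores it (rollback).

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the AT undo-log idea -- NOT Seata's implementation.
class AtBranchSketch {
    private final Map<Long, Integer> stock;                     // pretend table: id -> stock
    private final Map<Long, Integer> undoLog = new HashMap<>(); // before-images

    AtBranchSketch(Map<Long, Integer> stock) {
        this.stock = stock;
    }

    // Phase 1: business update and before-image recorded together,
    // then the local transaction commits and local locks are released.
    void deduct(long id, int count) {
        undoLog.put(id, stock.get(id));          // save before-image
        stock.put(id, stock.get(id) - count);    // apply business change
    }

    // Phase 2 commit: nothing to undo; just drop the undo log
    // (Seata does this asynchronously).
    void globalCommit() {
        undoLog.clear();
    }

    // Phase 2 rollback: compensate by restoring the before-images.
    void globalRollback() {
        undoLog.forEach(stock::put);
        undoLog.clear();
    }
}
```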
2. Practical Example
2.1 Project Structure
Modules:
tx-at-order : Order management module
tx-at-storage : Inventory management module
2.2 Dependency Management
All required dependencies are declared in the parent pom.xml. Core dependencies include Spring Cloud Alibaba Nacos discovery and config, Spring Cloud bootstrap, Spring Cloud OpenFeign (for the remote calls shown later), and the Seata Spring Boot starter.
<code><properties>
    <java.version>17</java.version>
    <spring-cloud.version>2021.0.9</spring-cloud.version>
    <spring-cloud-alibaba.version>2021.0.4.0</spring-cloud-alibaba.version>
    <seata.version>1.6.1</seata.version>
</properties>

<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
</dependency>
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-bootstrap</artifactId>
</dependency>
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>${seata.version}</version>
</dependency>
<!-- Seata integration for Spring Cloud (propagates the XID across Feign calls).
     Exclude its bundled Seata artifacts so ${seata.version} above wins. -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <exclusions>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-all</artifactId>
        </exclusion>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
        </exclusion>
    </exclusions>
</dependency></code>
2.3 Common Configuration for Both Modules
<code>spring:
  cloud:
    nacos:
      server-addr: localhost:8848
      username: nacos
      password: xxxooo
      discovery:
        enabled: true
        group: cloudApp
      config:
        file-extension: yaml
        group: txConfig

seata:
  tx-service-group: dxGroup
  data-source-proxy-mode: AT
  enable-auto-data-source-proxy: true
  config:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      group: SEATA_GROUP
      data-id: seataServer.properties
      namespace: c3c5d364-c9cb-460a-a4ed-2b38f251f1fe
      username: nacos
      password: xxxooo
  # A matching seata.registry section (type: nacos, same server and namespace)
  # is also needed so each client can discover seata-server.
</code>
2.4 Seata Server Configuration
The Seata server configuration file application.yml should contain the following:
<code>seata:
  config:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      namespace: c3c5d364-c9cb-460a-a4ed-2b38f251f1fe
      group: SEATA_GROUP
      username: nacos
      password: xxxooo
      data-id: seataServer.properties
  registry:
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      group: SEATA_GROUP
      namespace: c3c5d364-c9cb-460a-a4ed-2b38f251f1fe
      cluster: default
      username: nacos
      password: xxxooo
</code>
2.5 Nacos Configuration for Seata
In Nacos, create a configuration whose Data Id (seataServer.properties), group (SEATA_GROUP), and namespace match the settings above; both the Seata server and the client modules read it.
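A minimal sketch of what seataServer.properties might contain for db store mode. The mapping from the clients' tx-service-group (dxGroup) to a server cluster is required; the database name, user, and password below are assumptions:

```properties
# Map the clients' seata.tx-service-group to the server cluster name
service.vgroupMapping.dxGroup=default

# Persist global/branch/lock state in MySQL (db store mode)
store.mode=db
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?rewriteBatchedStatements=true
store.db.user=seata
store.db.password=seata
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.lockTable=lock_table
store.db.distributedLockTable=distributed_lock
```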
2.6 Inventory Module
<code>@Service
public class StorageService {

    private final StorageRepository storageRepository;

    public StorageService(StorageRepository storageRepository) {
        this.storageRepository = storageRepository;
    }

    // Local transaction; enlisted as a branch of the global transaction
    // through Seata's data source proxy.
    @Transactional
    public void deductStorage(Long id, Integer count) {
        this.storageRepository.updateStorage(id, count);
    }
}
</code>
<code>@GetMapping("/{id}/{count}")
public Object deductStorage(@PathVariable("id") Long id, @PathVariable("count") Integer count) {
    this.storageService.deductStorage(id, count);
    return "success";
}
</code>
2.7 Order Module
<code>@FeignClient(name = "app-storage", fallbackFactory = StorageFallbackFactory.class)
public interface StorageClient {

    @GetMapping("/storages/{id}/{count}")
    String deductStorage(@PathVariable("id") Long id, @PathVariable("count") Integer count);
}
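
// The StorageFallbackFactory referenced above is not shown in the article;
// the class below is a possible sketch (it assumes spring-cloud-starter-openfeign
// is on the classpath). Note the rethrow: a fallback that silently returns a
// stub would swallow the remote failure, so @GlobalTransactional in the caller
// would never see an exception and could not trigger the global rollback.
@Component
class StorageFallbackFactory implements FallbackFactory<StorageClient> {
    @Override
    public StorageClient create(Throwable cause) {
        return (id, count) -> {
            throw new IllegalStateException("storage service unavailable", cause);
        };
    }
}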
</code>
<code>@GlobalTransactional
public void save() {
    Order order = new Order();
    order.setSno("S001 - " + new Random().nextInt(100000));
    order.setAmount(BigDecimal.valueOf(666.66));
    this.orderRepository.save(order);
    // Remote call; the XID is propagated so the inventory update
    // joins the same global transaction.
    this.storageClient.deductStorage(1L, 10);
}
</code>
<code>@GetMapping("/save")
public Object createOrder() {
    this.orderService.save();
    return "success";
}
</code>
2.8 Database Scripts for DB Store Mode
<code>-- Global table
CREATE TABLE IF NOT EXISTS `global_table` (
    `xid` VARCHAR(128) NOT NULL,
    `transaction_id` BIGINT,
    `status` TINYINT NOT NULL,
    `application_id` VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name` VARCHAR(128),
    `timeout` INT,
    `begin_time` BIGINT,
    `application_data` VARCHAR(2000),
    `gmt_create` DATETIME,
    `gmt_modified` DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status`,`gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- Branch table
CREATE TABLE IF NOT EXISTS `branch_table` (
    `branch_id` BIGINT NOT NULL,
    `xid` VARCHAR(128) NOT NULL,
    `transaction_id` BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id` VARCHAR(256),
    `branch_type` VARCHAR(8),
    `status` TINYINT,
    `client_id` VARCHAR(64),
    `application_data` VARCHAR(2000),
    `gmt_create` DATETIME(6),
    `gmt_modified` DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- Lock table
CREATE TABLE IF NOT EXISTS `lock_table` (
    `row_key` VARCHAR(128) NOT NULL,
    `xid` VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id` BIGINT NOT NULL,
    `resource_id` VARCHAR(256),
    `table_name` VARCHAR(32),
    `pk` VARCHAR(36),
    `status` TINYINT NOT NULL DEFAULT '0' COMMENT '0:locked, 1:rollbacking',
    `gmt_create` DATETIME,
    `gmt_modified` DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- Distributed lock table
CREATE TABLE IF NOT EXISTS `distributed_lock` (
    `lock_key` CHAR(20) NOT NULL,
    `lock_value` VARCHAR(20) NOT NULL,
    `expire` BIGINT,
    PRIMARY KEY (`lock_key`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
</code>
Run the scripts above against the database used by the Seata server itself (this is what db store mode persists into). The undo_log table below, by contrast, must be created in each business module's own database:
<code>CREATE TABLE `undo_log` (
    `id` BIGINT NOT NULL AUTO_INCREMENT,
    `branch_id` BIGINT NOT NULL,
    `xid` VARCHAR(100) NOT NULL,
    `context` VARCHAR(128) NOT NULL,
    `rollback_info` LONGBLOB NOT NULL,
    `log_status` INT NOT NULL,
    `log_created` DATETIME NOT NULL,
    `log_modified` DATETIME NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`),
    KEY `ix_log_created` (`log_created`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
</code>
2.9 Testing
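Before testing, each business database also needs its own tables; the article does not show them, so the DDL below is an assumed sketch (table and column names are guesses consistent with the entities used above):

```sql
-- Assumed order table (tx-at-order database)
CREATE TABLE `t_order` (
    `id` BIGINT NOT NULL AUTO_INCREMENT,
    `sno` VARCHAR(64),
    `amount` DECIMAL(10, 2),
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- Assumed storage table (tx-at-storage database), seeded with one row
CREATE TABLE `t_storage` (
    `id` BIGINT NOT NULL AUTO_INCREMENT,
    `count` INT,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
INSERT INTO `t_storage` (`id`, `count`) VALUES (1, 100);
```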
Invoke the order service endpoint /save. The logs of the order and inventory services show the same XID, confirming that both local transactions are coordinated under one global transaction. To verify rollback, introduce an exception (e.g., a deliberate divide-by-zero) after the remote call in save(), restart the services, and invoke the endpoint again: both services roll back, as the database state and the log output confirm.
All steps demonstrate a complete Seata AT‑mode implementation for distributed transaction management.