
Exploring Alibaba’s Tongyi Qianwen AI Model, SWOT, Recipe Demo, and Code Samples for Spark Same‑Period Analysis and Java Bubble Sort

The article reviews Alibaba’s Tongyi Qianwen large‑language model, shares a cooking recipe generated by the AI, presents a SWOT analysis, and provides code examples—including a Spark Scala script for same‑period month‑over‑month calculations and a Java bubble‑sort implementation.

Rare Earth Juejin Tech Community

Alibaba announced its large‑language model "Tongyi Qianwen" at the Cloud Summit on April 11, 2023. The model, under development at Alibaba DAMO Academy since 2019, is slated to be integrated into all Alibaba products, with DingTalk and Tmall Genie the first to test the new features.

A test address (https://tongyi.aliyun.com/) is provided for users to try the model. The author shares personal impressions, noting that the UI differs from Baidu's Wenxin and ChatGPT by offering topic‑focused modules and a "Treasure Bag" of features.

The article includes a sample AI‑generated recipe for "Fish‑Fragrant Shredded Pork," listing ingredients and step‑by‑step instructions in ordered lists.

A SWOT analysis of the AI model is presented:

Strengths: fast text generation, multilingual support, personalized content.

Weaknesses: requires detailed input, risk of similarity to existing texts, potential infringement.

Opportunities: growing AI adoption, productivity gains for enterprises, higher customer satisfaction.

Threats: job displacement, reduced creativity, legal disputes over generated content.

The author then demonstrates the model's coding capabilities with two code examples.

First, a Spark Scala script intended to calculate year‑over‑year and period‑over‑period (同环比) values. The initial attempt creates a DataFrame, defines a window, and aggregates, but it never computes any ratio — and as written it would not even compile (for example, withColumn is passed a plain string where Spark expects a Column, and the rows are single strings rather than (id, date) tuples).

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql._

val dateColumn = Seq("date").toDF("date")
val timeWindow = dateColumn.withColumn("end_date", "date").over(
  Window.partitionBy("date").orderBy("date")
)

val df = spark.createDataFrame(Seq(
  ("2019-01-01"), ("2019-01-02"), ("2019-01-03"), ("2019-01-04"),
  ("2019-01-05"), ("2019-01-06"), ("2019-01-07"), ("2019-01-08"),
  ("2019-01-09"), ("2019-01-10"), ("2019-01-11"), ("2019-01-12"),
  ("2019-01-13"), ("2019-01-14"), ("2019-01-15"), ("2019-01-16"),
  ("2019-01-17"), ("2019-01-18"), ("2019-01-19"), ("2019-01-20"),
  ("2019-01-21"), ("2019-01-22"), ("2019-01-23"), ("2019-01-24"),
  ("2019-01-25"), ("2019-01-26"), ("2019-01-27"), ("2019-01-28"),
  ("2019-01-29"), ("2019-01-30"), ("2019-01-31")
), Seq("id", "date"))

val resultDF = df.groupBy(dateColumn, "id").agg(first("date"), avg("value").as("value"))
resultDF.show()

After feedback that the ratio was missing, the model produced a revised version that adds an index column and a join meant to compute the same‑period comparison. The revision is reproduced verbatim below; note that it is still not valid Spark code (among other problems, the four‑string join call matches no overload of Dataset.join, and groupBy is passed a DataFrame where column references are expected).

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.types._

val dateColumn = Seq("date").toDF("date")
val timeWindow = dateColumn.withColumn("end_date", "date").over(
  Window.partitionBy("date").orderBy("date")
)

val df = spark.createDataFrame(Seq(
  ("2019-01-01"), ("2019-01-02"), ("2019-01-03"), ("2019-01-04"),
  ("2019-01-05"), ("2019-01-06"), ("2019-01-07"), ("2019-01-08"),
  ("2019-01-09"), ("2019-01-10"), ("2019-01-11"), ("2019-01-12"),
  ("2019-01-13"), ("2019-01-14"), ("2019-01-15"), ("2019-01-16"),
  ("2019-01-17"), ("2019-01-18"), ("2019-01-19"), ("2019-01-20"),
  ("2019-01-21"), ("2019-01-22"), ("2019-01-23"), ("2019-01-24"),
  ("2019-01-25"), ("2019-01-26"), ("2019-01-27"), ("2019-01-28"),
  ("2019-01-29"), ("2019-01-30"), ("2019-01-31")
), Seq("id", "date"))

val resultDF = df.groupBy(dateColumn, "id").agg(first("date"), avg("value").as("value"))

val idxWindow = Seq("date", "end_date").toDF("idx", "end_date")
val resultDF2 = resultDF.join(idxWindow, "left_id", "right_id", "date")
resultDF2.show()
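Whatever one makes of the generated scripts, the underlying period‑over‑period arithmetic is simple: current value divided by previous value, minus one. Below is a minimal hand‑written Java sketch of that calculation; the daily values are invented for illustration, since the AI's script never populated a value column. (In Spark itself, the idiomatic tool for this would be the lag window function over a date‑ordered Window.)

```java
import java.util.Arrays;

public class PeriodOverPeriod {

    // Period-over-period ratio: current / previous - 1,
    // computed for each consecutive pair of daily values.
    static double[] ratios(double[] values) {
        double[] r = new double[values.length - 1];
        for (int i = 1; i < values.length; i++) {
            r[i - 1] = values[i] / values[i - 1] - 1.0;
        }
        return r;
    }

    public static void main(String[] args) {
        // Hypothetical daily totals for three consecutive dates
        double[] daily = {100.0, 110.0, 99.0};
        System.out.println(Arrays.toString(ratios(daily)));
        // day 2 is +10% vs. day 1; day 3 is -10% vs. day 2
    }
}
```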

Finally, a Java example demonstrates a simple bubble‑sort algorithm, illustrating basic double‑loop logic for ordering an integer array.

public static void bubbleSort(int[] nums) {
    int n = nums.length;
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (nums[j] > nums[j + 1]) {
                int temp = nums[j];
                nums[j] = nums[j + 1];
                nums[j + 1] = temp;
            }
        }
    }
}
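Unlike the Spark attempts, this method works as generated. A quick runnable check, with the sort wrapped in a class (the class name and sample array are arbitrary):

```java
import java.util.Arrays;

public class BubbleSortDemo {

    // The generated bubble sort: each outer pass bubbles the largest
    // remaining element to the end of the unsorted prefix.
    public static void bubbleSort(int[] nums) {
        int n = nums.length;
        for (int i = 0; i < n - 1; i++) {
            for (int j = 0; j < n - i - 1; j++) {
                if (nums[j] > nums[j + 1]) {
                    int temp = nums[j];
                    nums[j] = nums[j + 1];
                    nums[j + 1] = temp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] nums = {5, 2, 9, 1, 5, 6};
        bubbleSort(nums);
        System.out.println(Arrays.toString(nums)); // prints [1, 2, 5, 5, 6, 9]
    }
}
```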

The author concludes that the overall experience with the AI model is acceptable, encourages readers to follow the "1点东西" WeChat public account for more AI resources, and notes that the model does not support image generation.

Tags: Java, AI, large language model, Spark, Scala, bubble sort, SWOT
Written by

Rare Earth Juejin Tech Community

Juejin, a tech community that helps developers grow.
