
How Machines Learn: From Newton’s Second Law to the Core Steps of Supervised Learning

This article illustrates how a machine can rediscover Newton’s second law by treating force and acceleration data as a simple linear regression problem, detailing the three fundamental steps of hypothesis space definition, loss function design, and optimization through calculus or gradient methods.

DataFunTalk

Course: Machine Learning Storytelling
Instructor: Bi Ran, Chief Architect at Baidu
Editor: Hoh Xil
Source: Machine Learning Bootcamp
Platform: Baidu Tech Academy, PaddlePaddle, DataFun

The lesson continues the machine‑learning series by showing how a machine could learn Newton’s second law (a = F/m) from five observed force‑acceleration data points.

Step 1 – Define a hypothesis space: Plot the five data points, notice that they lie roughly on a straight line through the origin, and hypothesize a linear relationship a = w·F, where w is an unknown parameter to be learned.

Step 2 – Design a loss (evaluation) metric: Use the sum of squared differences between the observed accelerations and the line's predictions, loss = Σ(y_i – w·F_i)², where y_i is the acceleration observed for force F_i. Minimizing this loss yields the best-fitting line.
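The loss in Step 2 can be sketched in a few lines of Python. The five (force, acceleration) pairs below are hypothetical, assuming noiseless observations of an object of mass m = 2 kg, so the true parameter is w = 1/m = 0.5:

```python
# Hypothetical noiseless data for an object of mass m = 2 kg, so a = F / 2.
forces = [2.0, 4.0, 6.0, 8.0, 10.0]
accels = [1.0, 2.0, 3.0, 4.0, 5.0]

def loss(w, F, y):
    """Sum of squared differences between observed and predicted accelerations."""
    return sum((y_i - w * F_i) ** 2 for F_i, y_i in zip(F, y))

# The true parameter w = 1/m = 0.5 drives the loss to zero on noiseless data;
# any other candidate leaves a positive residual.
print(loss(0.5, forces, accels))  # prints 0.0
print(loss(0.4, forces, accels))  # roughly 2.2
```

A smaller loss means a better-fitting line, which is exactly the evaluation metric the lesson describes.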

Step 3 – Find the optimal hypothesis: Treat the loss as a function of w alone, differentiate (dL/dw = -2·Σ F_i·(y_i – w·F_i)), set the derivative to zero, and solve for w. On the observed data the solution is w = 1/m, reproducing Newton's second law a = F/m.
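Setting the derivative to zero gives a closed-form solution, w = Σ F_i·y_i / Σ F_i². A minimal sketch, again on hypothetical noiseless data for a 2 kg object:

```python
# Hypothetical noiseless observations of a 2 kg object (a = F / 2),
# so the recovered parameter should equal 1/m = 0.5.
forces = [2.0, 4.0, 6.0, 8.0, 10.0]
accels = [1.0, 2.0, 3.0, 4.0, 5.0]

# Solving dL/dw = -2 * sum(F_i * (y_i - w * F_i)) = 0 for w gives:
w = sum(F * y for F, y in zip(forces, accels)) / sum(F ** 2 for F in forces)
print(w)  # prints 0.5
```

The learned w equals the reciprocal of the object's mass, which is the "rediscovered" physical law.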

The three core components—hypothesis space, loss function, and solving algorithm—constitute the generic framework of supervised learning; different choices for each component lead to the myriad machine‑learning models in use today.
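The solving algorithm need not be closed-form calculus: the same loss can be minimized iteratively, which is how most modern models are trained. A minimal gradient-descent sketch, on the same hypothetical noiseless 2 kg data:

```python
# Hypothetical noiseless data for an object of mass 2 kg (a = F / 2).
forces = [2.0, 4.0, 6.0, 8.0, 10.0]
accels = [1.0, 2.0, 3.0, 4.0, 5.0]

w = 0.0     # arbitrary starting guess for the parameter
lr = 0.001  # learning rate (step size), chosen small enough to converge
for _ in range(200):
    # Gradient of the squared-error loss with respect to w.
    grad = -2 * sum(F * (y - w * F) for F, y in zip(forces, accels))
    w -= lr * grad  # step downhill on the loss surface

print(round(w, 6))  # prints 0.5
```

Iterative descent reaches the same answer as the calculus-based solution (w = 1/m), but generalizes to hypothesis spaces where no closed form exists.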

Future lessons will explore richer, non‑linear hypothesis spaces, more sophisticated loss designs, and advanced optimization algorithms.

[Figure: the three components of supervised learning (hypothesis space, loss function, optimization algorithm)]

Finally, the article reminds readers that the learned knowledge is represented by the parameter w, which in this example equals the reciprocal of the object’s mass, exactly matching the physical law.

For more content, follow DataFun and watch the next episode on the value of big data.

Tags: optimization, machine learning, loss function, supervised learning, hypothesis space, Newton's law
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
