Human‑Machine Symbiosis and AI Ethics: Insights from Professor Xiao Yanghua and CEO Han Bicheng
In a 2024 Inclusion·Bund conference talk, Professor Xiao Yanghua and CEO Han Bicheng discuss how AI and brain‑computer interfaces will reshape human identity, social relations, and ethics, emphasizing human‑centered guidelines, technology inclusiveness, and proactive governance to prevent misuse and preserve humanity.
Technology development should not overlook human care. At the 2024 Inclusion·Bund Conference, Zhou Jian, a 24‑year‑old one‑armed young man, performed on the piano with an intelligent bionic hand, creating a warm moment on stage.
Subsequently, Professor Xiao Yanghua of Fudan University’s School of Computer Science and Director of the Shanghai Key Laboratory of Data Science, together with Han Bicheng, founder and CEO of Zhejiang Qiangnao Technology, held a dialogue on the future of human‑machine symbiosis.
They believe that the large‑scale application of AI and related technologies will reshape the essence of humanity and, consequently, our social relationships. In the face of AI ethics and other risks, they call for the early establishment of AI application guidelines that put people at the core.
According to Xiao, as brain‑computer interfaces and AI become widespread, machines will serve as external brains and limbs for humans, turning people into something like Nietzsche’s Übermensch (“overman”), who transcends themselves with the aid of intelligent tools.
He adds that AI will become a proxy for humans in various productive and social activities, and that relationships among humans, between humans and machines, and among machines themselves will inevitably become part of our study of social relations.
Han sees brain‑computer interface technology evolving over the next 5‑10 years in three stages: repair, enhancement, and higher‑order interaction.
Repair aims to help those with brain disorders or limb disabilities regain normal life. Enhancement could, for example, supplement the cognition of the elderly, extending their functional years. The next generation of interaction may move beyond language, allowing thoughts to be transmitted directly.
While new technologies bring many possibilities, they also raise significant ethical concerns.
AI’s massive deployment presents several major challenges. The foremost is how our social superstructure—production relations and other institutions—can adapt to the rapid advancement of AI‑driven productive forces. Generative AI evolves on a monthly basis, yet human emotions, values, and ethics adjust far more slowly, creating a tension that must be addressed.
The second challenge is technological inclusiveness: preventing a small group from gaining undue competitive advantage through advanced tech, and guarding against technology addiction and potential backlash.
A further concern is that AI can “create humans,” or even “super‑humans.” While AI already assists the visually impaired and physically disabled, it also enables abilities beyond normal human limits, prompting careful consideration of the associated risks.
Looking ahead, they stress that AI applications must be human‑centered, returning technology to its original purpose of serving people, and exercising extreme caution with anything that harms human nature.
Xiao warns that large‑scale misuse of AI could damage the very essence of being human; therefore, proactive technology governance is essential to anticipate and mitigate such threats.
Han emphasizes the importance of AI ethics committees and argues that when transformative technology emerges, it should first be used to help those who are most in need, paying special attention to the “slow walkers.”