IBM Demonstrates AI‑Powered Conversational Robot with Gestures at RoboBusiness
IBM showcased a humanoid robot that combines speech, intonation, and body language using Watson‑derived AI techniques, highlighting research on multimodal communication, early‑stage experimentation, and the broader rollout of cognitive‑computing APIs for developers.
Robert High, chief technology officer of Watson, talking to a robot on stage at RoboBusiness.
IBM is applying artificial‑intelligence methods originally created for Watson to teach robots to better understand and imitate human communication.
During a keynote at the RoboBusiness conference in San Jose, California, Robert High demonstrated these techniques using a small humanoid robot.
The robot, an Aldebaran Nao model, spoke with realistic intonation, made appropriate hand gestures, and even displayed impatience and sarcasm, such as looking at an imagined watch when urged to hurry.
High told MIT Technology Review that the interaction was prerecorded because the system struggles in noisy settings, but he stressed that the demo reflects real research. His team runs machine-learning algorithms over video footage to associate gestures and intonation with specific phrases, on the premise that language alone is insufficient for human-like communication.
“We augment the words with physical gestures to clarify this and that,” High said, noting that robots can incorporate gesturing, body language, eye movement, and subtle cues to reinforce the meaning of spoken language.
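The article does not describe IBM's actual algorithms, but the idea of learning which gesture tends to accompany a phrase can be illustrated with a deliberately simple sketch: count co-occurrences of phrases and gestures in annotated footage, then emit the most frequent gesture for each phrase. The class name, gesture labels, and training data below are all invented for illustration.

```python
from collections import Counter, defaultdict

class GestureAssociator:
    """Toy model: learns which gesture most often accompanies a phrase."""

    def __init__(self):
        # phrase -> Counter of gestures seen alongside it
        self._counts = defaultdict(Counter)

    def observe(self, phrase, gesture):
        # Record one annotated (phrase, gesture) pair,
        # e.g. extracted from labeled video footage.
        self._counts[phrase.lower()][gesture] += 1

    def gesture_for(self, phrase, default="none"):
        # Return the gesture most frequently seen with this phrase.
        counts = self._counts.get(phrase.lower())
        if not counts:
            return default
        return counts.most_common(1)[0][0]

# Hand-labeled observations (illustrative data, not IBM's).
model = GestureAssociator()
model.observe("hurry up", "glance_at_watch")
model.observe("hurry up", "glance_at_watch")
model.observe("hurry up", "shrug")
model.observe("over there", "point")

print(model.gesture_for("hurry up"))    # glance_at_watch
print(model.gesture_for("over there"))  # point
```

A production system would of course learn from raw video rather than hand-labeled pairs, and would model intonation and timing as well, but the co-occurrence intuition is the same.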
Human-robot interaction is becoming increasingly important as robots move beyond factory floors into collaborative environments such as stores, offices, and homes.
High added that the project is still in an early stage, with ongoing experiments to determine what is feasible, useful, and economically interesting.
IBM originally used a range of AI techniques to build Watson, which famously won the game show Jeopardy! by mining vast amounts of information and extracting meaning from text.
This effort has expanded into a broader “cognitive computing” initiative spanning many AI approaches, and IBM now offers the resulting machine‑learning capabilities to developers through an online API.
Some robot manufacturers are already testing these APIs to give their products the ability to answer spoken queries in varied forms and retrieve useful information.
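The article does not document the API itself, so as a hedged sketch, a robot answering spoken queries through such a cloud question-answering service might post the transcribed question and read back an answer. The endpoint URL, request shape, and `transport` parameter below are all hypothetical, chosen so the example runs without a network connection.

```python
import json
from urllib import request

def ask(question, endpoint="https://example.com/qa", transport=None):
    """Send a question to a hypothetical question-answering HTTP API.

    `endpoint` is a placeholder, not a real IBM URL. `transport` is an
    injectable function (url, data) -> bytes so callers and tests can
    substitute a stub for the real network call.
    """
    payload = json.dumps({"question": question}).encode("utf-8")
    if transport is None:
        def transport(url, data):
            req = request.Request(
                url, data=data,
                headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                return resp.read()
    raw = transport(endpoint, payload)
    return json.loads(raw)["answer"]

# Demo with a stubbed transport, so no real service is contacted.
def fake_transport(url, data):
    q = json.loads(data)["question"]
    return json.dumps({"answer": "stub answer to: " + q}).encode("utf-8")

print(ask("What is the tallest mountain?", transport=fake_transport))
```

On a real robot, the question string would come from a speech-to-text stage, and the returned answer would feed the speech-synthesis and gesture layers described above.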