David Ku, Corporate Vice President and Chief Technology Officer of AI & Research at Microsoft, announced in a blog post that the company had acquired Semantic Machines, a Berkeley, California-based conversational AI company. The natural language processing technology developed by Semantic Machines will be integrated into various Microsoft products, including the Azure Bot Service and Cortana.
Semantic Machines stated on its website that existing natural language systems such as Microsoft Cortana, Apple Siri, and Google Now comprehend only commands, not conversations; Semantic Machines' technology, by contrast, understands full conversations as well as commands. Some of the most common commands handled by today's digital assistants include creating reminders, reporting the weather, setting timers, and controlling music. Ku said, “For an enriching communication to take place, assistants need to be able to communicate through a natural dialogue instead of just responding to the commands.”
Daniel Roth, CEO and co-founder of Semantic Machines, is a serial technology entrepreneur who previously launched Voice Signal Technologies (bought by Nuance Communications in 2007 for $300 million) and Shaser BioScience (acquired by Spectrum Brands in 2012 for $100 million). Damon Pender, CFO and co-founder of Semantic Machines, previously served as CFO of Shaser BioScience, TeraDiode, and NeoSaej. Larry Gillick, CTO and co-founder of Semantic Machines, had been Vice President of Core Technology at Voice Signal Technologies, Vice President of Research at Dragon Systems, Vice President of Research for mobile devices at Nuance, and Chief Speech Scientist for Siri at Apple. Dan Klein, Semantic Machines' Chief Scientist, co-founder, and VP of Research, is a full-time professor of computer science at UC Berkeley and previously served as Chief Scientist at Adap.tv.
One of Semantic Machines' core products is the Conversation Engine, which extracts semantic intent from natural input such as voice or text and produces a self-updating learning framework for managing state, dialog context, salience, and end-user goals. The natural language generation (NLG) technology in the Conversation Engine then formulates responses to the user based on that dialog context. Ku added that Semantic Machines uses machine learning to let users interact with, and discover, services and information naturally and with significantly less effort.
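The article does not describe how the Conversation Engine is implemented, but the ideas it names, extracting an intent from an utterance and carrying dialog context forward across turns, can be illustrated with a minimal sketch. Everything here (the keyword-based intent table, the `DialogState` class) is a hypothetical toy, not Semantic Machines' or Microsoft's actual technology:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical keyword-to-intent table for the command types the article
# mentions (reminders, weather, timers, music). A real system would use a
# learned model rather than keyword matching.
INTENT_KEYWORDS = {
    "set_reminder": ["remind", "reminder"],
    "get_weather": ["weather", "forecast"],
    "set_timer": ["timer"],
    "play_music": ["play", "music"],
}


@dataclass
class DialogState:
    """Toy dialog-state tracker: keeps turn history and the last intent."""
    history: List[Tuple[str, str]] = field(default_factory=list)
    last_intent: Optional[str] = None

    def update(self, utterance: str) -> str:
        intent = self._extract_intent(utterance)
        # Context carry-over: a follow-up like "and tomorrow?" has no
        # keyword of its own, so we reuse the previous turn's intent.
        # This is the "conversation, not just commands" idea in miniature.
        if intent == "unknown" and self.last_intent is not None:
            intent = self.last_intent
        self.history.append((utterance, intent))
        self.last_intent = intent
        return intent

    @staticmethod
    def _extract_intent(utterance: str) -> str:
        text = utterance.lower()
        for intent, keywords in INTENT_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                return intent
        return "unknown"
```

For example, `DialogState().update("What's the weather in Berkeley?")` yields `"get_weather"`, and a follow-up `update("And tomorrow?")` stays `"get_weather"` because the state carries the prior intent forward, the kind of contextual behavior a command-only assistant lacks.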
Microsoft was the first company to add full-duplex voice sense to a conversational AI system, allowing users to hold natural conversations with Cortana and XiaoIce. XiaoIce has held about 30 billion conversations, averaging up to 30 minutes each, with 200 million users across China, Japan, Indonesia, India, and the U.S. “Combining Semantic Machines’ technology with Microsoft’s own AI advances, we aim to deliver powerful, natural and more productive user experiences that will take conversational computing to a new level,” explained Ku in the blog post.
Microsoft has focused on speech recognition and natural language technology for over two decades, and the company aims to realize a vision in which computers everywhere can see, hear, talk, and understand as humans do. That vision led to the launch of the Cognitive Services framework in 2016, which centers on developing bots and merging speech recognition and natural language understanding into intelligent assistants. More than 1 million developers currently use Microsoft Cognitive Services, and about 300,000 developers use the Azure Bot Service.