IBM unveiled today an experimental platform that allows developers to embed Watson functions and cognitive technology into various devices. The platform, dubbed Project Intu, can be accessed through the Watson Developer Cloud, Intu Gateway, and GitHub.
More specifically, the goal is to simplify the process for developers who want to create cognitive experiences in robots and other Internet of Things (IoT) devices, experiences that extend into the physical world.
Developers can combine Watson services like Conversation, Language, and Visual Recognition with a device's own capabilities to carry out natural interactions with users. For example, a device could help a customer in a retail setting or greet a guest at a hotel door.
Project Intu removes the need for developers to program each individual capability into a device. The platform provides an environment designed for building cognitive experiences across a range of platforms, including Raspberry Pi, macOS, Windows, and Linux.
Normally, a developer would have to work out how to integrate different cognitive services into an end-user experience; Project Intu provides a ready-made platform for doing so.
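To illustrate the kind of wiring the platform is meant to take off developers' hands, here is a minimal Python sketch of mapping recognized intents (such as those a conversation service might return) to device behaviors. Every name here (`IntentRouter`, `greet_guest`, and so on) is hypothetical and does not reflect the actual Project Intu API.

```python
# Hypothetical sketch only: none of these names come from the Project Intu
# SDK. The point is the intent -> device-behavior mapping that a platform
# like this would otherwise leave each developer to build by hand.

class IntentRouter:
    """Maps intents (e.g. from a conversation service) to device actions."""

    def __init__(self):
        self._handlers = {}

    def on(self, intent, handler):
        # Register a handler function for a named intent.
        self._handlers[intent] = handler

    def dispatch(self, intent, **context):
        # Look up the handler for this intent and run it; fall back
        # to a default response for unrecognized intents.
        handler = self._handlers.get(intent)
        if handler is None:
            return "Sorry, I can't help with that."
        return handler(**context)

def greet_guest(name="guest"):
    # On a real device this might trigger speech synthesis or a gesture.
    return f"Welcome to the hotel, {name}!"

def locate_product(product="item"):
    return f"You can find the {product} in aisle 4."

router = IntentRouter()
router.on("greeting", greet_guest)
router.on("find_product", locate_product)

print(router.dispatch("greeting", name="Ada"))
print(router.dispatch("find_product", product="umbrellas"))
```

In a real deployment, the intent would come from a speech or language service and the handlers would drive actuators or displays; the sketch only shows the routing layer in between.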
Project Intu reflects IBM’s ongoing work in embodied cognition. The idea behind making the project available as an experimental offering is to help the company refine the technology.
Watson and IBM’s cloud seem to be a big part of the company’s future plans. In its third quarter, IBM’s strategic initiatives, which include Watson and the cloud, reported $8 billion in revenues, up 15 percent from the same period last year. This accounted for more than 40 percent of the company’s total revenues.