Information from the system, which Saxena had described at the 2014 Robotics: Science and Systems Conference in Berkeley, is being translated and stored in a robot-friendly format that robots will be able to draw on when needed.
The India-born Indian Institute of Technology-Kanpur graduate has now launched a website for the project at robobrain.me, which will display things the brain has learnt, and visitors will be able to make additions and corrections. Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. “Our laptops and cellphones have access to all the information we want.
If a robot encounters a situation it hasn"t seen before it can query Robo Brain in the cloud,” Saxena, assistant professor, Microsoft Faculty Fellow, and Sloan Fellow, at Cornell University, said in a statement.
Saxena and his colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, say Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behaviour.
His team includes Ashesh Jain, a third-year PhD computer science student at Cornell. Robo Brain employs what computer scientists call structured deep learning, where information is stored in many levels of abstraction.
Deep learning is a set of algorithms, or instruction steps for calculations, in machine learning. For instance, an easy chair is a member of a class of chairs, and going up another level, chairs are furniture.
Robo Brain knows that chairs are something you can sit on, but that a human can also sit on a stool, a bench or the lawn, the statement said.
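The chair example above can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration of storing knowledge at several levels of abstraction, where properties such as “can be sat on” are attached at one level and inherited by more specific concepts; the class names and the “affords” relation are assumptions for illustration, not Robo Brain's actual representation.

```python
# Illustrative sketch only: concepts arranged in levels of abstraction,
# with affordances (e.g. "sit on") attached where they apply.
concepts = {
    "easy chair": {"is_a": "chair"},
    "chair":      {"is_a": "furniture", "affords": ["sit on"]},
    "stool":      {"is_a": "furniture", "affords": ["sit on"]},
    "bench":      {"is_a": "furniture", "affords": ["sit on"]},
    "lawn":       {"is_a": "surface",   "affords": ["sit on"]},
    "furniture":  {"is_a": "object"},
}

def affordances(name):
    """Collect affordances by walking up the is_a hierarchy."""
    found = []
    while name in concepts:
        found += concepts[name].get("affords", [])
        name = concepts[name].get("is_a")
    return found

print(affordances("easy chair"))  # ['sit on'], inherited from 'chair'
```

In this toy version, asking about an easy chair yields “sit on” even though the property is stored one level up, which is the kind of generalisation the abstraction levels are meant to support.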
A robot"s computer brain stores what it has learnt in a form that mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines—called nodes and edges.
The nodes could represent objects, actions or parts of an image, and each one is assigned a probability—how much you can vary it and still be correct.
In searching for knowledge, a robot"s brain makes its own chain and looks for one in the knowledge base that matches within those limits.
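To make the idea of nodes, edges and probability limits concrete, here is a minimal, hypothetical sketch: each node carries a confidence value, and a query chain is accepted only if its steps are linked and stay within that limit. The node names, probabilities and threshold are illustrative assumptions, not Robo Brain's actual data structure or query method.

```python
# Illustrative sketch only: a tiny graph of nodes with probabilities,
# and a check that a query chain matches within a confidence limit.
nodes = {            # node -> probability (confidence)
    "mug": 0.9,
    "grasp": 0.8,
    "pour": 0.6,
}
edges = {("mug", "grasp"), ("grasp", "pour")}   # directed links

def chain_matches(chain, min_confidence=0.5):
    """Check that consecutive steps are linked and confident enough."""
    for a, b in zip(chain, chain[1:]):
        if (a, b) not in edges:
            return False
        if nodes.get(a, 0) < min_confidence or nodes.get(b, 0) < min_confidence:
            return False
    return True

print(chain_matches(["mug", "grasp", "pour"]))  # True
print(chain_matches(["mug", "pour"]))           # False: no direct edge
```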
“The Robo Brain will look like a gigantic, branching graph with abilities for multi-dimensional queries,” said Aditya Jami, a visiting researcher at Cornell, who designed the large database for the brain. Jami is also co-founder and chief technology officer at Predict Effect, Zoodig Inc. The basic skills of perception, planning and language understanding are critical for robots to perform tasks in human environments. Robots need to perceive with sensors, and plan accordingly.
If a person wants to talk to a robot, for instance, the robot has to listen, get the context and knowledge of the environment, and plan its motion to execute the task accordingly.
For example, an industrial robot needs to detect objects to be manipulated, plan its motions and communicate with the human operator. A self-driving robot needs to detect objects on the road, plan where to drive and also communicate with the passenger.
Scientists at the lab at Cornell do not manually programme the robots. Instead, they take a machine learning approach, using a variety of data and learning methods to train their robots.
“Our robots learn from watching (3D) images on the Internet, from observing people via cameras, from observing users playing video games, and from humans giving feedback to the robot,” the Cornell website reads.
There have been similar attempts to make computers understand context and learn from the Internet.
For instance, since January 2010, scientists at Carnegie Mellon University (CMU) have been working to build a never-ending machine learning system that acquires the ability to extract structured information from unstructured Web pages.
If successful, the scientists say it will result in a knowledge base (or relational database) of structured information that mirrors the content of the Web. They call this system the never-ending language learner, or NELL.
NELL first attempts to read, or extract facts from, text found in hundreds of millions of web pages (for example, a relation such as “plays instrument”). Second, it attempts to improve its reading competence, so that it can extract more facts from the Web, more accurately, the following day. So far, NELL has accumulated over 50 million candidate beliefs by reading the Web, and it is considering these at different levels of confidence, according to information on the CMU website.
“NELL has high confidence in 2,348,535 of these beliefs—these are displayed on this website. It is not perfect, but NELL is learning,” the website reads.
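As a rough illustration of what candidate beliefs held at different confidence levels might look like, the sketch below stores facts as (entity, relation, value) triples and keeps only the high-confidence ones. The example triples, relation names and threshold are assumptions for illustration, not NELL's actual data or code.

```python
# Illustrative sketch only: candidate beliefs with confidence scores,
# filtered to keep only the high-confidence ones.
candidate_beliefs = [
    # (entity, relation, value, confidence)
    ("george_harrison", "plays_instrument", "guitar", 0.97),
    ("pittsburgh", "city_in_state", "pennsylvania", 0.93),
    ("apple", "plays_instrument", "piano", 0.12),   # a noisy extraction
]

def high_confidence(beliefs, threshold=0.9):
    """Keep only beliefs treated as high confidence."""
    return [b for b in beliefs if b[3] >= threshold]

for belief in high_confidence(candidate_beliefs):
    print(belief)
```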
There is also International Business Machines' (IBM) Watson, which beat Jeopardy players in 2011 and has now joined hands with the United Services Automobile Association (USAA) to help members of the military prepare for civilian life.
In January 2014, IBM said it would spend $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud”.
More than 2,500 developers and start-ups have reached out to the IBM Watson Group since the Watson Developers Cloud was launched in November 2013, according to a 22 August blog post in the Harvard Business Review.