[CVPR 2018] Training AI with data from dogs: University of Washington develops an AI system that simulates dog behavior


By Xinzhiyuan

Source: TechCrunch

Compiled by Xiao Qin

[Xinzhiyuan Guide] Most machine learning systems are built from a human perspective, but researchers at the University of Washington and the Allen Institute for Artificial Intelligence are attempting to train an AI system with behavioral data from a dog. The researchers collected movement data from an Alaskan Malamute through sensors and other devices and used it to train the AI system toward three goals: 1) act like a dog and predict its future movements; 2) plan tasks like a dog; and 3) learn from dog behavior. The paper has been accepted to CVPR 2018. The point of this work is to understand visual data well enough that an agent can act and perform tasks in the visual world.

We have trained machine learning systems to recognize objects, navigate, and recognize facial expressions, but as difficult as those problems are, machine learning has not yet reached the level of sophistication needed to simulate, for example, a dog. The purpose of this project is to do just that - in a very limited way, of course. By observing the behavior of a very well-behaved dog, this AI learns the basics of how to act like a dog.

This is a collaborative study between the University of Washington and the Allen Institute for Artificial Intelligence, and the paper was presented at CVPR in June of this year.

Abstract

We study how to directly model a visually intelligent agent. Computer vision typically focuses on solving various subtasks related to visual intelligence. We depart from this standard computer vision approach; instead, we try to directly model a visually intelligent agent. Our model takes visual information as input and directly predicts the agent's actions. To that end, we introduce the DECADE dataset, a dataset of dog behavior collected from the dog's perspective. Using this data, we can model how the dog acts and how it plans its movements. Under a variety of metrics, we show that given only visual input, we can successfully model this agent. Moreover, compared to representations trained for the image classification task, our model learns representations that encode different information and can also generalize to other domains. In particular, by using this dog-modeling task for representation learning, we obtained very good results on walkable surface estimation and scene classification.

Making sense of visual data: imitating dogs, learning from dogs

Why do this study? While there has been much work on sub-tasks that simulate perception, such as identifying an object so it can be picked up, there are few studies that aim to "understand visual data to the point where an agent can act and perform tasks in the visual world." In other words, the goal is to simulate not the behavior of the eye, but the subject that controls the eye.

So why choose a dog? Because dogs are very complex agents, the researchers say: "Their goals and motivations are often unpredictable." In other words, dogs are smart, but we don't know what they are thinking.

As an initial foray into this area of research, the team wanted to see whether a system could be built that accurately predicts a dog's actions by closely monitoring its behavior and mapping its movements and actions onto the environment it sees.

Putting a set of sensors on a Malamute to collect data

To accomplish this, the researchers put a set of basic sensors on a Malamute named Kelp M. Redmon: a GoPro camera on Kelp's head; six inertial measurement units (on the legs, tail, and body) to determine the position of each part; a microphone; and an Arduino board that ties the data together.

They spent many hours recording the dog's activities - walking in different environments, fetching, playing at the dog park, eating - and synchronized the dog's movements with the environment he saw. The result is a Dataset of Ego-Centric Actions in a Dog Environment, recorded from the dog's own perspective, referred to as the DECADE dataset. The researchers used this dataset to train a new AI agent.
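To make the structure of such a dataset concrete, here is a minimal sketch of how one synchronized DECADE-style sample might be represented. The class and field names are illustrative assumptions, not the paper's actual data format.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical sketch of one synchronized DECADE-style sample.
# Field names are assumptions for illustration; the actual dataset
# format used in the paper may differ.
@dataclass
class DogSample:
    timestamp_s: float   # seconds since the recording session started
    frame_path: str      # ego-centric GoPro frame captured at this time
    audio_path: str      # microphone clip aligned to the frame
    # one absolute-orientation quaternion (w, x, y, z) per IMU:
    # the legs, tail, and body
    imu: Dict[str, Tuple[float, float, float, float]]

# Example: a single aligned reading
sample = DogSample(
    timestamp_s=12.4,
    frame_path="frames/000372.jpg",
    audio_path="audio/000372.wav",
    imu={"front_left_leg": (0.98, 0.05, -0.10, 0.15),
         "tail": (0.92, 0.00, 0.20, 0.33)},
)
```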

The agent is given some sort of sensory input - such as the view of a room or street, or a ball flying past - and must predict what the dog would do in that situation. It does not need to go into particular detail; even just figuring out how the dog's body moves and where it moves to is already a substantial task.

Hessam Bagherinezhad of the University of Washington, one of the researchers, explained: "It learns how to move its joints in order to walk, and how to avoid obstacles when walking or running. It learns to run after squirrels, follow its owner around, and chase flying dog toys (when playing frisbee). These are some of the basic AI tasks in computer vision and robotics (e.g., motion planning, walkable surfaces, object detection, object tracking, person recognition) that we have been trying to solve by collecting separate data for each task."

The study poses three problems: (1) acting like a dog: predicting the dog's next movements from images of what it has just seen; (2) planning actions like a dog; and (3) learning from the dog's behavior: for example, predicting walkable areas.
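As a concrete illustration of task (1), here is a minimal sketch of a behavior-prediction model in the spirit of the paper: a CNN encodes the recent ego-centric frames, and a recurrent network predicts a discrete movement class for each joint at the next time step. The backbone choice, layer sizes, and the joint/action counts below are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative sizes (assumptions, not the paper's exact values)
NUM_JOINTS = 6    # one IMU per tracked body part
NUM_ACTIONS = 8   # discretized movement classes per joint

class DogActPredictor(nn.Module):
    """Sketch: predict per-joint actions from a short ego-centric frame sequence."""
    def __init__(self, hidden_size=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()     # use ResNet-18 as a 512-d frame encoder
        self.encoder = backbone
        self.lstm = nn.LSTM(512, hidden_size, batch_first=True)
        # one classification head per joint
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_size, NUM_ACTIONS) for _ in range(NUM_JOINTS)]
        )

    def forward(self, frames):
        # frames: (batch, time, 3, 224, 224)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        last = out[:, -1]               # summary of the observed sequence
        # per-joint logits for the next step: (batch, joints, actions)
        return torch.stack([head(last) for head in self.heads], dim=1)

# Usage: logits = DogActPredictor()(torch.randn(2, 5, 3, 224, 224))
```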

These tasks involve some fairly complex requirements: for example, the dog model must know, just like a real dog, where it can walk when it needs to move from one location to another. It can't walk on trees or cars, and whether it can walk on the couch depends on the house. The model learns this too, and it can be deployed on its own as a computer vision model for finding out where a pet (or a legged robot) could go in a given image.
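As a sketch of how the learned representation might be reused for walkable surface estimation: freeze an encoder trained on the dog-modeling task (one that returns a spatial feature map, e.g., a ResNet truncated before its pooling layer) and attach a small per-pixel classification head. The head design and feature sizes below are assumptions for illustration; the paper only reports that its learned representation transfers well to this task.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class WalkableSurfaceModel(nn.Module):
    """Sketch: reuse a dog-modeling encoder for walkable surface estimation."""
    def __init__(self, encoder: nn.Module, feat_channels: int = 512):
        super().__init__()
        self.encoder = encoder               # pretrained on the dog task
        for p in self.encoder.parameters():  # freeze it; train only the head
            p.requires_grad = False
        self.head = nn.Sequential(           # tiny decoder: walkable vs. not
            nn.Conv2d(feat_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, image):
        feats = self.encoder(image)          # (b, feat_channels, h/32, w/32)
        logits = self.head(feats)
        # upsample to input resolution for a per-pixel walkability mask
        return F.interpolate(logits, size=image.shape[-2:],
                             mode="bilinear", align_corners=False)

# Usage with a spatial ResNet-18 trunk standing in for the dog encoder:
trunk = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
mask_logits = WalkableSurfaceModel(trunk)(torch.randn(1, 3, 224, 224))
```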

Model architecture for predicting dog behavior

Model architecture for learning how dogs plan

Model architecture for predicting walkable areas

The researchers say this is only a preliminary experiment, and while it has been successful, the results are limited. Follow-up studies may consider introducing additional senses (such as smell) or testing how well a model trained on one dog (or many dogs) generalizes to other dogs. They conclude: "We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world."

Address of the paper: https://arxiv.org/pdf/1803.10827.pdf

