
Berkeley AI Research: FaSTrack - a tool to ensure safe navigation of dynamic systems


AiTechYun

Edited by Yining

Watch first → https://www.youtube.com/watch?v=KcJJOI2TYJA

Problem: Fast and safe motion planning

Real-time autonomous motion planning and navigation is hard, especially when safety must be guaranteed. It becomes harder still for complex dynamic systems subject to external disturbances (e.g. wind) and operating in unknown environments. In this work, our goal is to make existing real-time motion planners robust, so that safety is guaranteed while navigating dynamic systems.

In control theory, there are techniques like Hamilton-Jacobi reachability analysis that provide rigorous safety guarantees on the behavior of a system, together with an optimal controller for reaching a given goal (see Figure 1). However, in general, the computational methods used in Hamilton-Jacobi reachability analysis are only tractable for decomposable and/or low-dimensional systems; this is due to the "curse of dimensionality". This means we cannot compute guaranteed-safe trajectories for systems with more than two dimensions. Since most real-world system models, such as cars, aircraft and quadrotors, have more than two dimensions, these methods are often intractable in practice.
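As a rough, hedged sketch in our own notation (not taken from the paper linked below): for a system with dynamics $\dot{x} = f(x, u, d)$, where $u$ is the control and $d$ a disturbance, and a function $\ell(x)$ giving the signed distance to the unsafe set, HJ reachability computes a value function of approximately the form

\[ V(x, T) \;=\; \max_{u(\cdot)} \; \min_{d(\cdot)} \; \min_{t \in [0, T]} \; \ell\big(x(t)\big), \]

so that states with $V(x, T) > 0$ can be kept safe over the horizon $T$, and a safety-preserving controller can be read off from the gradient, $u^*(x) \in \arg\max_{u} \min_{d} \nabla V(x) \cdot f(x, u, d)$. Solving for $V$ on a grid over the state space is exactly what becomes intractable as the dimension grows.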

Hamilton-Jacobi reachability analysis: http://ieeexplore.ieee.org/abstract/document/1463302/

On the other hand, geometric motion planners like rapidly-exploring random trees (RRT) and model predictive control (MPC) can plan in real time by using a simplified model of the system dynamics and/or a short planning horizon. While this lets us perform real-time motion planning, the resulting trajectory may be too simplistic, lead to unavoidable collisions, and may even be dynamically infeasible (see Figure 1). For example, imagine you are riding a bike and following a path planned along the sidewalk. The path sends you straight towards a tree and then, at the last second, asks you to avoid it with a 90 degree turn. But your bike cannot turn that sharply, so you hit the tree. Roboticists typically mitigate this problem by padding the obstacles, i.e. planning as if the obstacles were somewhat larger than they actually are. This greatly reduces the chance of collision, but it still provides no formal guarantee, and accidental collisions can still occur.

So how do we combine the speed of fast planning with the safety guarantees of slow planning?

Figure 1: On the left, a high-dimensional vehicle navigates through an obstacle course toward a goal. Computing the optimal safe trajectory is slow, sometimes intractable, and replanning is essentially impossible. On the right, we simplify our model of the vehicle (in this case, assuming it can move in straight lines connecting points). This lets us plan very quickly, but as we execute the planned path we may find that we cannot actually follow it and end up in a collision.

Solution: FaSTrack

FaSTrack stands for Fast and Safe Tracking. FaSTrack is a tool that essentially robustifies fast motion planners like RRT or MPC while maintaining real-time performance. It lets the user plan with a simplified dynamics model, while maintaining safety in the form of a precomputed bound on the maximum possible distance between the planner's state and the actual state of the autonomous system. We call this distance the tracking error bound. The same precomputation also yields an optimal control lookup table, which provides the optimal error-feedback controller for the autonomous system and allows planning to run online in real time.

Figure 2: FaSTrack plans with a simplified model (blue), but precomputes a tracking error bound that captures all possible deviations due to model mismatch and environmental disturbances (e.g. wind), together with an error-feedback controller that keeps the system within this bound. We can then augment our obstacles by the tracking error bound, which guarantees that the actual dynamic system (red) remains safe. Augmenting obstacles is not a new idea in the robotics community, but by using our tracking error bound we can account for the system's dynamics and for disturbances.

Offline precomputation

We precompute this tracking error bound by viewing the problem as a pursuit-evasion game between the planner and the tracker. The planner uses a simplified model of the true autonomous system, which is what makes real-time planning possible; the tracker uses a more accurate model of the real autonomous system. We assume the tracker (the real autonomous system) is always trying to catch up with the planner, and we want to know the maximum relative distance (i.e. the maximum tracking error) in the worst case: when the planner is actively trying to evade the tracker. If this distance has an upper bound, then we know the largest tracking error that can ever occur at runtime.

Figure 3: The tracking system, which uses the complex dynamics of the real system, follows a plan generated with a much simpler model.

Because we care about the maximum tracking error, we first determine the relative dynamics between the two systems by fixing the planner at the origin and expressing the tracker's dynamics relative to the planner. We then define the cost as the distance to that origin, i.e. the relative distance of the tracker, as shown in Figure 4. The tracker tries to minimize this cost, while the planner tries to maximize it. As we evolve these optimal trajectories over time, we keep track of the highest cost incurred along the way. If the tracker always eventually catches up with the planner, this cost converges to a fixed value.
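As a rough, hedged sketch of this game in our own notation (the precise formulation is in the paper linked below): let $r$ be the tracker's state expressed relative to the planner, with relative dynamics $\dot{r} = g(r, u_{\mathrm{track}}, u_{\mathrm{plan}}, d)$ and cost $\ell(r)$ equal to the distance of the relative position from the origin. The game's value function captures the worst-case peak cost along the trajectory:

\[ V(r, T) \;=\; \max_{u_{\mathrm{plan}}(\cdot),\, d(\cdot)} \; \min_{u_{\mathrm{track}}(\cdot)} \; \max_{t \in [0, T]} \; \ell\big(r(t)\big). \]

If the tracker can always eventually catch up, $V$ converges to some $V_\infty$ as $T$ grows, and the tracking error bound described next is a sublevel set $\{\, r : V_\infty(r) \le \underline{V} \,\}$ for the smallest level $\underline{V}$ whose sublevel set is invariant.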

The smallest invariant level set of the converged value function gives the tracking error bound, as shown in Figure 5. In addition, the gradient of the converged value function yields an optimal error-feedback control policy with which the tracker can catch up to the planner. We used Ian Mitchell's Level Set Toolbox and reachability analysis to solve this differential game. For a more detailed treatment of the optimization, see our recent paper presented at the 2017 IEEE Conference on Decision and Control.

Ian Mitchell's Level Set Toolbox: http://www.cs.ubc.ca/~mitchell/ToolboxLS/

Papers: https://arxiv.org/abs/1703.07373
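To make the role of the lookup table concrete, here is a minimal, purely illustrative Python sketch, not the authors' code (the actual value function is computed in MATLAB with the Level Set Toolbox): we fabricate a converged value function on a small 2D grid of relative states, store its gradient as the lookup table, and pick the tracker control that minimizes the inner product of that gradient with an assumed double-integrator-style relative dynamics.

```python
import numpy as np

# Illustrative stand-in for the offline result. We fabricate a grid over a
# 2D relative state r = (relative position, relative velocity) purely to show
# how a gradient lookup table and a control lookup could be organized.
pos = np.linspace(-2.0, 2.0, 81)
vel = np.linspace(-2.0, 2.0, 81)
P, W = np.meshgrid(pos, vel, indexing="ij")
value = np.sqrt(P**2 + 0.5 * W**2)        # placeholder "converged" value function

# The lookup table: gradients of the value function at every grid point.
dV_dp, dV_dv = np.gradient(value, pos, vel)

def lookup_optimal_control(r, accel_candidates=(-1.0, 0.0, 1.0)):
    """Pick the tracker acceleration that most decreases the value function,
    i.e. minimizes grad V(r) . f_rel(r, u) over a few candidate controls
    (a stand-in for the analytic argmin used in practice)."""
    i = int(np.abs(pos - r[0]).argmin())
    j = int(np.abs(vel - r[1]).argmin())
    grad = np.array([dV_dp[i, j], dV_dv[i, j]])
    # Assumed relative dynamics sketch: d(pos)/dt = vel, d(vel)/dt = u
    # (planner and disturbance inputs omitted for brevity).
    return min(accel_candidates, key=lambda u: grad @ np.array([r[1], u]))

# The tracking error bound is a sublevel set of the converged value function.
teb_mask = value <= value.min() + 0.25    # placeholder level
```

In practice the grid, relative dynamics, and candidate controls come from the actual system model, and the minimizing control is usually available in closed form rather than by enumeration.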

Figure 4

Figure 5

In Figure 4, the value function is initialized to the cost function (distance to the origin) and then evolved according to the differential game. Figure 5 shows 3D and 2D slices of this value function. Each slice can be thought of as a "candidate" tracking error bound. Over time, some of these candidate bounds become infeasible. The smallest invariant level set of the converged value function gives us the tightest tracking error bound that is still feasible.

Online real-time planning

In the online phase, we sense obstacles within a given sensing range and augment them by the tracking error bound using a Minkowski sum. Using these augmented obstacles, the motion planner decides its next desired state. Based on the relative state between the tracker and the planner, the optimal control for the tracker (the autonomous system) is read from the lookup table. The autonomous system applies this optimal control, and the process repeats until the goal is reached. This means the motion planner can keep planning quickly, and safety is ensured simply by augmenting obstacles and consulting the lookup table!
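Below is a minimal sketch of what this online loop could look like, assuming axis-aligned box obstacles and a box-shaped tracking error bound; names such as `sense`, `plan_next_state`, `relative_state`, `lookup_optimal_control`, `step_dynamics` and `at_goal` are hypothetical placeholders for the user's planner and system model, not an API from the FaSTrack authors.

```python
import numpy as np

def augment_obstacle(obstacle_box, teb_half_widths):
    """Minkowski sum of an axis-aligned obstacle box with a box-shaped
    tracking error bound: grow the obstacle by the bound along every axis."""
    lo, hi = obstacle_box
    return np.asarray(lo) - teb_half_widths, np.asarray(hi) + teb_half_widths

def fastrack_online_loop(tracker_state, planner_state, goal, teb_half_widths,
                         sense, plan_next_state, relative_state,
                         lookup_optimal_control, step_dynamics, at_goal,
                         max_steps=10_000):
    """Sense obstacles, augment them by the tracking error bound, take one
    planning step with the simple model, look up the tracker's optimal
    control from the precomputed table, apply it, and repeat."""
    for _ in range(max_steps):
        obstacles = [augment_obstacle(o, teb_half_widths)
                     for o in sense(tracker_state)]
        planner_state = plan_next_state(planner_state, goal, obstacles)
        u = lookup_optimal_control(relative_state(tracker_state, planner_state))
        tracker_state = step_dynamics(tracker_state, u)
        if at_goal(planner_state, goal):
            break
    return tracker_state
```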

Figure 6: A 10D near-hover quadrotor model (blue line), simulated in MATLAB, "tracks" a 3D planning model whose path is planned with RRT (green dots). As new obstacles are discovered (turning red), the RRT planner computes a new path to the goal. Based on the relative state between the planning and autonomous systems, the optimal control is found via the lookup table. Even when the RRT planner turns suddenly, we are guaranteed to stay within the tracking error bound (blue box).

Reducing conservatism through meta-planning

One consequence of formulating safe tracking as a game between planner and tracker is that the resulting tracking error bound is usually quite conservative: since the planner is allowed to behave in the worst possible way, the tracker cannot be guaranteed to stay very close to it. One solution is to use multiple planning models, each with its own tracking error bound. The resulting "meta-plan" consists of trajectory segments computed by each planner, each of which is tracked with the corresponding optimal controller. This is illustrated in Figure 7, where the large blue error bound corresponds to a planner that can move fast, and the small red bound corresponds to a slower-moving planner.

Figure 7: By considering two different planners, each with its own tracking error bound, our algorithm finds a guaranteed-safe "meta-plan" that prefers the less precise but faster blue planner, and falls back to the more precise but slower red planner only in the vicinity of obstacles. This produces a natural, intuitive behavior that optimally trades off planner conservatism against the vehicle's maneuvering speed.

Safe switching

The key to making this work is ensuring that every transition between planners is safe. This can get a little complicated, but the main idea is that a transition between two planners (call them A and B) is safe if we can guarantee that the invariant set computed for A is contained within the invariant set computed for B. This holds for many pairs of planners, as in Figure 7 when switching from the small red bound to the larger blue bound, but it is not true in general. In general, we need to solve another dynamic game, very similar to the original one in FaSTrack, where now we want the set of states that we will never leave and from which we can guarantee ending up inside the invariant set of B. Typically, the resulting safe switching bound (SSB) is larger than the tracking error bound (TEB), as shown below.
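As a toy illustration of this switching check (our own simplification, not the authors' method): assume each bound is an axis-aligned box of half-widths centered on the planner, and that the extra game above has produced a safe switching bound `ssb_B` for the target planner. A switch might then be vetted roughly as follows.

```python
import numpy as np

def contained(inner_half_widths, outer_half_widths):
    """Box A (given by half-widths) sits inside box B iff it is no wider
    than B along every axis (both boxes centered on the planner)."""
    return bool(np.all(np.asarray(inner_half_widths) <= np.asarray(outer_half_widths)))

def transition_is_safe(teb_A, teb_B, relative_state=None, ssb_B=None):
    """Switching from planner A to planner B is immediately safe if A's
    invariant set is contained in B's. Otherwise, fall back to checking that
    the current relative state lies inside a precomputed safe switching bound
    for B (an illustrative stand-in for the extra game described above)."""
    if contained(teb_A, teb_B):
        return True
    if ssb_B is not None and relative_state is not None:
        return bool(np.all(np.abs(relative_state) <= np.asarray(ssb_B)))
    return False
```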

Figure 8: For a transition from a planner with a large tracking error bound to one with a small tracking error bound, the safe switching bound is typically larger than the tracking error bound, as shown.

Efficient online meta-planning

To do this efficiently, we use a modified version of the classical RRT algorithm. Typically, RRT works by sampling points in the state space and connecting them with line segments to form a tree rooted at the starting point. In our case, we replace the line segments with the actual trajectories generated by the individual planners. To find the shortest route to the goal, we prefer the planners that can move faster, trying them first, and only fall back to the slower-moving ones if the faster ones fail.

However, we must take care to ensure that the safe switching bound is satisfied. This is especially important when the meta-planner decides to transition to a more precise, slower-moving planner, as in the example above. In that case, we run a one-step virtual backtracking algorithm in which we verify that the previous trajectory segment is collision-free under the switching controller.
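Putting the last two paragraphs together, here is a hypothetical Python sketch of a single meta-planning tree extension; `planner.connect`, `planner.teb`, `planner.max_speed`, `collision_free` and `transition_ok` are illustrative placeholders rather than the authors' interfaces.

```python
def try_extend(tree_node, sample, planners, collision_free, transition_ok):
    """One meta-planning tree extension: try planners from fastest to slowest;
    accept the first segment that is collision-free (against obstacles
    augmented by that planner's tracking error bound) and whose switch from
    the parent's planner is safe."""
    for planner in sorted(planners, key=lambda p: -p.max_speed):  # fastest first
        segment = planner.connect(tree_node.state, sample)
        if segment is None:
            continue
        if not collision_free(segment, planner.teb):
            continue
        # One-step virtual backtracking: when switching planners, the previous
        # segment must also be collision-free under the switching controller
        # (whose invariant set is the safe switching bound, not the TEB).
        if tree_node.planner is not None and tree_node.planner is not planner:
            if not transition_ok(tree_node.segment, tree_node.planner, planner):
                continue
        return segment, planner       # add (sample, segment, planner) to the tree
    return None, None                 # extension failed for every planner
```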

