From Wikipedia, the free encyclopedia
Class: Search algorithm
Data structure: Graph
Worst-case performance: O(b^d)
Worst-case space complexity: O(b^d)

A* (pronounced "A-star") is a graph traversal and pathfinding algorithm that is used in many fields of computer science due to its completeness, optimality, and optimal efficiency.[1] Given a weighted graph, a source node and a goal node, the algorithm finds the shortest path (with respect to the given weights) from source to goal.

One major practical drawback is its O(b^d) space complexity, where d is the depth of the shallowest solution (the length of the shortest path from the source node to any given goal node) and b is the branching factor (the maximum number of successors for any given state), as it stores all generated nodes in memory. Thus, in practical travel-routing systems, it is generally outperformed by algorithms that can pre-process the graph to attain better performance,[2] as well as by memory-bounded approaches; however, A* is still the best solution in many cases.[3]

Peter Hart, Nils Nilsson and Bertram Raphael of Stanford Research Institute (now SRI International) first published the algorithm in 1968.[4] It can be seen as an extension of Dijkstra's algorithm. A* achieves better performance by using heuristics to guide its search.

Compared to Dijkstra's algorithm, the A* algorithm only finds the shortest path from a specified source to a specified goal, and not the shortest-path tree from a specified source to all possible goals. This is a necessary trade-off for using a specific-goal-directed heuristic. For Dijkstra's algorithm, since the entire shortest-path tree is generated, every node is a goal, and there can be no specific-goal-directed heuristic.

History

A* was invented by researchers working on Shakey the Robot's path planning.

A* was created as part of the Shakey project, which had the aim of building a mobile robot that could plan its own actions. Nils Nilsson originally proposed using the Graph Traverser algorithm[5] for Shakey's path planning.[6] Graph Traverser is guided by a heuristic function h(n), the estimated distance from node n to the goal node: it entirely ignores g(n), the distance from the start node to n. Bertram Raphael suggested using the sum, g(n) + h(n).[7] Peter Hart invented the concepts we now call admissibility and consistency of heuristic functions. A* was originally designed for finding least-cost paths when the cost of a path is the sum of its costs, but it has been shown that A* can be used to find optimal paths for any problem satisfying the conditions of a cost algebra.[8]

The original 1968 A* paper[4] contained a theorem stating that no A*-like algorithm[a] could expand fewer nodes than A* if the heuristic function is consistent and A*'s tie-breaking rule is suitably chosen. A "correction" was published a few years later[9] claiming that consistency was not required, but this was shown to be false in 1985 in Dechter and Pearl's definitive study of A*'s optimality (now called optimal efficiency), which gave an example of A* with a heuristic that was admissible but not consistent expanding arbitrarily more nodes than an alternative A*-like algorithm.[10]

Description

A* pathfinding algorithm navigating around a randomly generated maze
Illustration of A* search for finding a path between two points on a graph. From left to right, a heuristic that prefers points closer to the goal is used increasingly.

A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until the goal node is reached.

At each iteration of its main loop, A* needs to determine which of its paths to extend. It does so based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal. Specifically, A* selects the path that minimizes

f(n) = g(n) + h(n)

where n is the next node on the path, g(n) is the cost of the path from the start node to n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to the goal. The heuristic function is problem-specific. If the heuristic function is admissible – meaning that it never overestimates the actual cost to get to the goal – A* is guaranteed to return a least-cost path from start to goal.

Typical implementations of A* use a priority queue to perform the repeated selection of minimum (estimated) cost nodes to expand. This priority queue is known as the open set, fringe or frontier. At each step of the algorithm, the node with the lowest f(x) value is removed from the queue, the f and g values of its neighbors are updated accordingly, and these neighbors are added to the queue. The algorithm continues until a removed node (thus the node with the lowest f value out of all fringe nodes) is a goal node.[b] The f value of that goal is then also the cost of the shortest path, since h at the goal is zero in an admissible heuristic.

The algorithm described so far only gives the length of the shortest path. To find the actual sequence of steps, the algorithm can be easily revised so that each node on the path keeps track of its predecessor. After this algorithm is run, the ending node will point to its predecessor, and so on, until some node's predecessor is the start node.

As an example, when searching for the shortest route on a map, h(x) might represent the straight-line distance to the goal, since that is physically the smallest possible distance between any two points. For a grid map from a video game, using the Taxicab distance or the Chebyshev distance becomes better depending on the set of movements available (4-way or 8-way).
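These three heuristics can be sketched directly (a minimal illustration in Python; the coordinate-pair arguments and function names are assumptions of the sketch):

```python
import math

def euclidean(x1, y1, x2, y2):
    # Straight-line distance: admissible for any movement model,
    # since no path between the points can be shorter.
    return math.hypot(x2 - x1, y2 - y1)

def manhattan(x1, y1, x2, y2):
    # Taxicab distance: admissible for 4-way grid movement.
    return abs(x2 - x1) + abs(y2 - y1)

def chebyshev(x1, y1, x2, y2):
    # Chebyshev distance: admissible for 8-way movement
    # when diagonal steps cost the same as straight steps.
    return max(abs(x2 - x1), abs(y2 - y1))
```

For the same pair of points, euclidean ≤ manhattan and chebyshev ≤ manhattan, which is why the tighter heuristic should be matched to the actual movement model.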

If the heuristic h satisfies the additional condition h(x) ≤ d(x, y) + h(y) for every edge (x, y) of the graph (where d denotes the length of that edge), then h is called monotone, or consistent. With a consistent heuristic, A* is guaranteed to find an optimal path without processing any node more than once and A* is equivalent to running Dijkstra's algorithm with the reduced cost d'(x, y) = d(x, y) + h(y) − h(x).[11]
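For a finite graph given as a list of weighted edges, consistency can be checked exhaustively (a minimal sketch in Python; the (x, y, d) edge-triple format is an assumption of the sketch):

```python
def is_consistent(h, edges, goal):
    """Return True if heuristic h satisfies h(x) <= d(x, y) + h(y) on every
    directed edge (x, y) of weight d, and h(goal) == 0.

    h is a callable mapping a node to its heuristic value; edges is a list
    of (x, y, d) triples.
    """
    return h(goal) == 0 and all(h(x) <= d + h(y) for x, y, d in edges)
```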

Pseudocode


The following pseudocode describes the algorithm:

function reconstruct_path(cameFrom, current)
    total_path := {current}
    while current in cameFrom.Keys:
        current := cameFrom[current]
        total_path.prepend(current)
    return total_path

// A* finds a path from start to goal.
// h is the heuristic function. h(n) estimates the cost to reach goal from node n.
function A_Star(start, goal, h)
    // The set of discovered nodes that may need to be (re-)expanded.
    // Initially, only the start node is known.
    // This is usually implemented as a min-heap or priority queue rather than a hash-set.
    openSet := {start}

    // For node n, cameFrom[n] is the node immediately preceding it on the cheapest path from the start
    // to n currently known.
    cameFrom := an empty map

    // For node n, gScore[n] is the currently known cost of the cheapest path from start to n.
    gScore := map with default value of Infinity
    gScore[start] := 0

    // For node n, fScore[n] := gScore[n] + h(n). fScore[n] represents our current best guess as to
    // how cheap a path could be from start to finish if it goes through n.
    fScore := map with default value of Infinity
    fScore[start] := h(start)

    while openSet is not empty
        // This operation can occur in O(Log(N)) time if openSet is a min-heap or a priority queue
        current := the node in openSet having the lowest fScore[] value
        if current = goal
            return reconstruct_path(cameFrom, current)

        openSet.Remove(current)
        for each neighbor of current
            // d(current,neighbor) is the weight of the edge from current to neighbor
            // tentative_gScore is the distance from start to the neighbor through current
            tentative_gScore := gScore[current] + d(current, neighbor)
            if tentative_gScore < gScore[neighbor]
                // This path to neighbor is better than any previous one. Record it!
                cameFrom[neighbor] := current
                gScore[neighbor] := tentative_gScore
                fScore[neighbor] := tentative_gScore + h(neighbor)
                if neighbor not in openSet
                    openSet.add(neighbor)

    // Open set is empty but goal was never reached
    return failure

Remark: In this pseudocode, if a node is reached by one path, removed from openSet, and subsequently reached by a cheaper path, it will be added to openSet again. This is essential to guarantee that the path returned is optimal if the heuristic function is admissible but not consistent. If the heuristic is consistent, when a node is removed from openSet the path to it is guaranteed to be optimal, so the test ‘tentative_gScore < gScore[neighbor]’ will always fail if the node is reached again. The pseudocode implemented here is sometimes called the graph-search version of A*.[12] This is in contrast with the version without the ‘tentative_gScore < gScore[neighbor]’ test to add nodes back to openSet, which is sometimes called the tree-search version of A* and requires a consistent heuristic to guarantee optimality.
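For concreteness, the graph-search pseudocode can be sketched in Python with heapq, using lazy deletion of stale heap entries as a common substitute for a decrease-priority operation (all names here are illustrative, not part of the original pseudocode):

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, d, h):
    """Sketch of the graph-search A* above.

    neighbors(n) yields the successors of n, d(n, m) is the weight of the
    edge from n to m, and h(n) is the heuristic estimate from n to the goal.
    Returns the list of nodes on a cheapest path, or None if none exists.
    """
    g_score = {start: 0}                 # gScore with implicit default Infinity
    came_from = {}                       # cameFrom
    tie = count()                        # tie-breaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), start)]   # openSet as a min-heap on fScore

    while open_heap:
        f, _, current = heapq.heappop(open_heap)
        if current == goal:
            return reconstruct_path(came_from, current)
        # Lazy deletion: a popped entry whose f exceeds the node's current
        # best f is stale (a cheaper path was found later) and is skipped.
        if f > g_score[current] + h(current):
            continue
        for nb in neighbors(current):
            tentative = g_score[current] + d(current, nb)
            if tentative < g_score.get(nb, float('inf')):
                came_from[nb] = current
                g_score[nb] = tentative
                heapq.heappush(open_heap, (tentative + h(nb), next(tie), nb))
    return None                          # open set empty but goal never reached

def reconstruct_path(came_from, current):
    # Follow predecessor links back to the start, then reverse.
    path = [current]
    while current in came_from:
        current = came_from[current]
        path.append(current)
    return path[::-1]
```

On a toy graph such as graph = {'A': {'B': 1, 'C': 4}, 'B': {'C': 1, 'D': 5}, 'C': {'D': 1}, 'D': {}} with heuristic values {'A': 2, 'B': 1, 'C': 1, 'D': 0}, searching from 'A' to 'D' returns ['A', 'B', 'C', 'D'], the cost-3 path.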

Illustration of A* search for finding path from a start node to a goal node in a robot motion planning problem. The empty circles represent the nodes in the open set, i.e., those that remain to be explored, and the filled ones are in the closed set. Color on each closed node indicates the distance from the goal: the greener, the closer. One can first see the A* moving in a straight line in the direction of the goal, then when hitting the obstacle, it explores alternative routes through the nodes from the open set.

Example


An example of an A* algorithm in action where nodes are cities connected with roads and h(x) is the straight-line distance to the target point:

Key: green: start; blue: goal; orange: visited

The A* algorithm has real-world applications. In this example, edges are railroads and h(x) is the great-circle distance (the shortest possible distance on a sphere) to the target. The algorithm is searching for a path between Washington, D.C., and Los Angeles.

The A* algorithm finding a path of railroads between Washington, D.C. and Los Angeles.

Implementation details


There are a number of simple optimizations or implementation details that can significantly affect the performance of an A* implementation. The first detail to note is that the way the priority queue handles ties can have a significant effect on performance in some situations. If ties are broken so the queue behaves in a LIFO manner, A* will behave like depth-first search among equal cost paths (avoiding exploring more than one equally optimal solution).

When a path is required at the end of the search, it is common to keep with each node a reference to that node's parent. At the end of the search, these references can be used to recover the optimal path. If these references are being kept then it can be important that the same node doesn't appear in the priority queue more than once (each entry corresponding to a different path to the node, and each with a different cost). A standard approach here is to check if a node about to be added already appears in the priority queue. If it does, then the priority and parent pointers are changed to correspond to the lower-cost path. A standard binary heap based priority queue does not directly support the operation of searching for one of its elements, but it can be augmented with a hash table that maps elements to their position in the heap, allowing this decrease-priority operation to be performed in logarithmic time. Alternatively, a Fibonacci heap can perform the same decrease-priority operations in constant amortized time.
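Such a hash-table-augmented binary heap can be sketched as follows (illustrative Python, not a library API; items are assumed hashable and unique):

```python
class IndexedMinHeap:
    """Binary min-heap augmented with a hash table mapping each item to its
    position in the heap array, so decrease_key runs in O(log n)."""

    def __init__(self):
        self._heap = []        # list of (priority, item) pairs
        self._pos = {}         # item -> index in self._heap

    def push(self, item, priority):
        self._heap.append((priority, item))
        self._pos[item] = len(self._heap) - 1
        self._sift_up(len(self._heap) - 1)

    def decrease_key(self, item, priority):
        # Locate the item via the hash table, lower its priority, restore
        # the heap invariant by sifting toward the root.
        i = self._pos[item]
        assert priority <= self._heap[i][0]
        self._heap[i] = (priority, item)
        self._sift_up(i)

    def pop(self):
        # Remove and return (item, priority) with the minimum priority.
        self._swap(0, len(self._heap) - 1)
        priority, item = self._heap.pop()
        del self._pos[item]
        if self._heap:
            self._sift_down(0)
        return item, priority

    def __contains__(self, item):
        # Supports the "is this node already queued?" test described above.
        return item in self._pos

    def _swap(self, i, j):
        self._heap[i], self._heap[j] = self._heap[j], self._heap[i]
        self._pos[self._heap[i][1]] = i
        self._pos[self._heap[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self._heap[i][0] < self._heap[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            smallest = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self._heap[c][0] < self._heap[smallest][0]:
                    smallest = c
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest
```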

Special cases


Dijkstra's algorithm, as another example of a uniform-cost search algorithm, can be viewed as a special case of A* where h(x) = 0 for all x.[13][14] General depth-first search can be implemented using A* by considering that there is a global counter C initialized with a very large value. Every time we process a node we assign C to all of its newly discovered neighbors. After every single assignment, we decrease the counter C by one. Thus the earlier a node is discovered, the higher its h(x) value. Both Dijkstra's algorithm and depth-first search can be implemented more efficiently without including an h(x) value at each node.
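The counter trick can be sketched as follows (illustrative Python; edge costs are treated as zero so that f(n) = h(n), since the huge counter values would dominate any realistic g(n) term anyway):

```python
import heapq

def depth_first_via_a_star(start, neighbors):
    """Expand nodes in best-first order using the decreasing global counter C
    as the heuristic, which reproduces depth-first search order."""
    C = 10**9                          # counter initialized with a very large value
    h = {start: C}
    C -= 1
    open_heap = [(h[start], start)]
    seen = {start}
    order = []                         # records the expansion order
    while open_heap:
        _, current = heapq.heappop(open_heap)
        order.append(current)
        for nb in neighbors(current):
            if nb not in seen:
                seen.add(nb)
                h[nb] = C              # assign C to each newly discovered neighbor...
                C -= 1                 # ...then decrease the counter by one
                heapq.heappush(open_heap, (h[nb], nb))
    return order
```

On {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []} the expansion order is A, C, B, D: the most recently discovered node always has the lowest h value, so the search always dives into the newest frontier node first, exactly as depth-first search does.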

Properties


Termination and completeness


On finite graphs with non-negative edge weights A* is guaranteed to terminate and is complete, i.e. it will always find a solution (a path from start to goal) if one exists. On infinite graphs with a finite branching factor and edge costs that are bounded away from zero (d(x, y) > ε > 0 for some fixed ε), A* is guaranteed to terminate only if there exists a solution.[1]

Admissibility


A search algorithm is said to be admissible if it is guaranteed to return an optimal solution. If the heuristic function used by A* is admissible, then A* is admissible. An intuitive "proof" of this is as follows:

Call a node closed if it has been visited and is not in the open set. We close a node when we remove it from the open set. A basic property of the A* algorithm, which we'll sketch a proof of below, is that when a node n is closed, f(n) is an optimistic estimate (lower bound) of the true distance from the start to the goal. So when the goal node is closed, its f value is no more than the true distance. On the other hand, it is no less than the true distance, since it is the length of a path to the goal plus a heuristic term.

Now we'll see that whenever a node n is closed, f(n) is an optimistic estimate. It is enough to see that whenever the open set is not empty, it has at least one node n′ on an optimal path to the goal for which g(n′) is the true distance from the start to n′, since in that case f(n′) = g(n′) + h(n′) underestimates the distance to the goal, and therefore so does the smaller value chosen for the closed vertex. Let P be an optimal path from the start to the goal. Let n′ be the last closed node on P for which g(n′) is the true distance from the start to n′ (the start is one such vertex). The next node in P has the correct g value, since it was updated when n′ was closed, and it is open since it is not closed.

Optimality and consistency


Algorithm A is optimally efficient with respect to a set of alternative algorithms Alts on a set of problems P if for every problem p in P and every algorithm A′ in Alts, the set of nodes expanded by A in solving p is a subset (possibly equal) of the set of nodes expanded by A′ in solving p. The definitive study of the optimal efficiency of A* is due to Rina Dechter and Judea Pearl.[10] They considered a variety of definitions of Alts and P in combination with A*'s heuristic being merely admissible or being both consistent and admissible. The most interesting positive result they proved is that A*, with a consistent heuristic, is optimally efficient with respect to all admissible A*-like search algorithms on all "non-pathological" search problems. Roughly speaking, their notion of the non-pathological problem is what we now mean by "up to tie-breaking". This result does not hold if A*'s heuristic is admissible but not consistent. In that case, Dechter and Pearl showed there exist admissible A*-like algorithms that can expand arbitrarily fewer nodes than A* on some non-pathological problems.

Optimal efficiency is about the set of nodes expanded, not the number of node expansions (the number of iterations of A*'s main loop). When the heuristic being used is admissible but not consistent, it is possible for a node to be expanded by A* many times, an exponential number of times in the worst case.[15] In such circumstances, Dijkstra's algorithm could outperform A* by a large margin. However, more recent research found that this pathological case only occurs in certain contrived situations where the edge weight of the search graph is exponential in the size of the graph and that certain inconsistent (but admissible) heuristics can lead to a reduced number of node expansions in A* searches.[16][17]

Bounded relaxation

A* search that uses a heuristic that is 5.0(=ε) times a consistent heuristic, and obtains a suboptimal path

While the admissibility criterion guarantees an optimal solution path, it also means that A* must examine all equally meritorious paths to find the optimal path. To compute approximate shortest paths, it is possible to speed up the search at the expense of optimality by relaxing the admissibility criterion. Oftentimes we want to bound this relaxation, so that we can guarantee that the solution path is no worse than (1 + ε) times the optimal solution path. This new guarantee is referred to as ε-admissible.

There are a number of ε-admissible algorithms:

  • Weighted A* / static weighting.[18] If ha(n) is an admissible heuristic function, the weighted version of the A* search uses hw(n) = ε ha(n), ε > 1, as the heuristic function and performs the A* search as usual (which eventually happens faster than using ha, since fewer nodes are expanded). The path found by the search algorithm can then have a cost of at most ε times that of the least-cost path in the graph.[19]
  • Convex Upward/Downward Parabola (XUP/XDP).[20] Modification to the cost function in weighted A* to push optimality toward the start or goal. XDP gives paths which are near-optimal close to the start, and XUP paths are near-optimal close to the goal. Both yield ε-optimal paths overall.
  • Piecewise Upward/Downward Curve (pwXU/pwXD).[21] Similar to XUP/XDP but with piecewise functions instead of parabolas. Solution paths are also ε-optimal.
  • Dynamic Weighting[22] uses the cost function f(n) = g(n) + (1 + ε w(n)) h(n), where w(n) = 1 − d(n)/N if d(n) ≤ N and w(n) = 0 otherwise, and where d(n) is the depth of the search and N is the anticipated length of the solution path.
  • Sampled Dynamic Weighting[23] uses sampling of nodes to better estimate and debias the heuristic error.
  • A*ε[24] uses two heuristic functions. The first defines the FOCAL list, which is used to select candidate nodes, and the second, hF, is used to select the most promising node from the FOCAL list.
  • Aε[25] selects nodes with the function A f(n) + B hF(n), where A and B are constants. If no nodes can be selected, the algorithm will backtrack with the function C f(n) + D hF(n), where C and D are constants.
  • AlphA*[26] attempts to promote depth-first exploitation by preferring recently expanded nodes. AlphA* uses the cost function fα(n) = (1 + wα(n)) f(n), where wα(n) = λ if f(π(n)) ≤ f(ñ) and wα(n) = Λ otherwise, with λ and Λ constants satisfying λ ≤ Λ, π(n) the parent of n, and ñ the most recently expanded node.
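The static and dynamic weighting entries above can be sketched as plain cost functions (a minimal illustration in Python; the surrounding A* machinery is unchanged, and the function names are illustrative):

```python
# g is the path cost so far, h_val the admissible heuristic value at the
# node, and eps the allowed relaxation factor.

def weighted_f(g, h_val, eps):
    # Weighted A* / static weighting: f(n) = g(n) + eps * h(n), eps > 1.
    return g + eps * h_val

def dynamic_weighted_f(g, h_val, eps, depth, N):
    # Dynamic weighting: f(n) = g(n) + (1 + eps * w(n)) * h(n), with
    # w(n) = 1 - depth/N while depth <= N and 0 afterwards, so the heuristic
    # is inflated heavily near the root and fades out by depth N.
    w = 1 - depth / N if depth <= N else 0
    return g + (1 + eps * w) * h_val
```

With eps = 1, dynamic weighting behaves greedily at depth 0 (the heuristic counts double) and reduces to plain A* once the search reaches the anticipated solution depth N.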

Complexity


As a heuristic search algorithm, the performance of A* is heavily influenced by the quality of the heuristic function h(n). If the heuristic closely approximates the true cost to the goal, A* can significantly reduce the number of node expansions. On the other hand, a poor heuristic can lead to many unnecessary expansions.

Worst-Case Scenario


In the worst case, A* expands all nodes n for which f(n) < C*, where C* is the cost of the optimal goal node.

Why can’t it be worse


Suppose there is a node n in the open list with f(n) > C*, and it's the next node to be expanded. Since the goal node has f(goal) = C*, and C* < f(n), the goal node will have a lower f-value and will be expanded before n. Therefore, A* never expands nodes with f(n) > C*.

Why can’t it be better


Assume there exists an optimal algorithm that expands fewer nodes than A* in the worst case using the same heuristic. That means there must be some node n such that f(n) < C*, yet the algorithm chooses not to expand it.

Now consider a modified graph where a new edge of cost c (with h(n) ≤ c, so that the heuristic remains admissible) is added from n to the goal. If g(n) + c < C*, then the new optimal path goes through n. However, since the algorithm still avoids expanding n, it will miss the new optimal path, violating its optimality.

Therefore, no optimal algorithm, A* included, could expand fewer nodes than A* in the worst case.

Mathematical Notation


The worst-case complexity of A* is often described as O(b^d), where b is the branching factor and d is the depth of the shallowest goal. While this gives a rough intuition, it does not precisely capture the actual behavior of A*.

A more accurate bound considers the number of nodes with f(n) < C*. If ε is the smallest possible difference in f-cost between distinct nodes, then A* may expand up to:

O(b^(C*/ε))

nodes. This represents both the time and space complexity in the worst case.

Space Complexity


The space complexity of A* is roughly the same as that of all other graph search algorithms, as it keeps all generated nodes in memory.[1] In practice, this turns out to be the biggest drawback of the A* search, leading to the development of memory-bounded heuristic searches, such as Iterative deepening A*, memory-bounded A*, and SMA*.

Applications


A* is often used for the common pathfinding problem in applications such as video games, but was originally designed as a general graph traversal algorithm.[4] It finds applications in diverse problems, including the problem of parsing using stochastic grammars in NLP.[27] Other cases include informational search with online learning.[28]

Relations to other algorithms


What sets A* apart from a greedy best-first search algorithm is that it takes the cost/distance already traveled, g(n), into account.

Some common variants of Dijkstra's algorithm can be viewed as a special case of A* where the heuristic is h(x) = 0 for all nodes;[13][14] in turn, both Dijkstra and A* are special cases of dynamic programming.[29] A* itself is a special case of a generalization of branch and bound.[30]

A* is similar to beam search except that beam search maintains a limit on the number of paths that it has to explore.[31]

Variants


A* can also be adapted to a bidirectional search algorithm, but special care needs to be taken for the stopping criterion.[35]


Notes

  1. ^ "A*-like" means the algorithm searches by extending paths originating at the start node one edge at a time, just as A* does. This excludes, for example, algorithms that search backward from the goal or in both directions simultaneously. In addition, the algorithms covered by this theorem must be admissible, and "not more informed" than A*.
  2. ^ Goal nodes may be passed over multiple times if there remain other nodes with lower f values, as they may lead to a shorter path to a goal.

References

  1. ^ a b c Russell, Stuart J.; Norvig, Peter (2018). Artificial intelligence a modern approach (4th ed.). Boston: Pearson. ISBN 978-0134610993. OCLC 1021874142.
  2. ^ Delling, D.; Sanders, P.; Schultes, D.; Wagner, D. (2009). "Engineering Route Planning Algorithms". Algorithmics of Large and Complex Networks: Design, Analysis, and Simulation. Lecture Notes in Computer Science. Vol. 5515. Springer. pp. 117–139. doi:10.1007/978-3-642-02094-0_7. ISBN 978-3-642-02093-3.
  3. ^ Zeng, W.; Church, R. L. (2009). "Finding shortest paths on real road networks: the case for A*". International Journal of Geographical Information Science. 23 (4): 531–543. Bibcode:2009IJGIS..23..531Z. doi:10.1080/13658810801949850. S2CID 14833639.
  4. ^ a b c Hart, P. E.; Nilsson, N.J.; Raphael, B. (1968). "A Formal Basis for the Heuristic Determination of Minimum Cost Paths". IEEE Transactions on Systems Science and Cybernetics. 4 (2): 100–7. doi:10.1109/TSSC.1968.300136.
  5. ^ Doran, J. E.; Michie, D. (1966). "Experiments with the Graph Traverser program". Proc. R. Soc. Lond. A. 294 (1437): 235–259. Bibcode:1966RSPSA.294..235D. doi:10.1098/rspa.1966.0205. S2CID 21698093.
  6. ^ Nilsson, Nils J. (2009). The Quest for Artificial Intelligence (PDF). Cambridge: Cambridge University Press. ISBN 9780521122931. One of the first problems we considered was how to plan a sequence of 'way points' that Shakey could use in navigating from place to place. […] Shakey's navigation problem is a search problem, similar to ones I have mentioned earlier.
  7. ^ Nilsson, Nils J. (2009). The Quest for Artificial Intelligence (PDF). Cambridge: Cambridge University Press. ISBN 9780521122931. Bertram Raphael, who was directing work on Shakey at that time, observed that a better value for the score would be the sum of the distance traveled so far from the initial position plus my heuristic estimate of how far the robot had to go.
  8. ^ Edelkamp, Stefan; Jabbar, Shahid; Lluch-Lafuente, Alberto (2005). "Cost-Algebraic Heuristic Search". Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI) (PDF). pp. 1362–1367. ISBN 978-1-57735-236-5.
  9. ^ Hart, Peter E.; Nilsson, Nils J.; Raphael, Bertram (1972). "Correction to 'A Formal Basis for the Heuristic Determination of Minimum Cost Paths'" (PDF). ACM SIGART Bulletin (37): 28–29. doi:10.1145/1056777.1056779. S2CID 6386648.
  10. ^ a b Dechter, Rina; Judea Pearl (1985). "Generalized best-first search strategies and the optimality of A*". Journal of the ACM. 32 (3): 505–536. doi:10.1145/3828.3830. S2CID 2092415.
  11. ^ Nannicini, Giacomo; Delling, Daniel; Schultes, Dominik; Liberti, Leo (2012). "Bidirectional A* search on time-dependent road networks" (PDF). Networks. 59 (2): 240–251. doi:10.1002/NET.20438.
  12. ^ Russell, Stuart J.; Norvig, Peter (2009). Artificial intelligence: A modern approach (3rd ed.). Boston: Pearson. p. 95. ISBN 978-0136042594.
  13. ^ a b De Smith, Michael John; Goodchild, Michael F.; Longley, Paul (2007), Geospatial Analysis: A Comprehensive Guide to Principles, Techniques and Software Tools, Troubadour Publishing Ltd, p. 344, ISBN 9781905886609.
  14. ^ a b Hetland, Magnus Lie (2010), Python Algorithms: Mastering Basic Algorithms in the Python Language, Apress, p. 214, ISBN 9781430232377, archived from the original on 15 February 2022.
  15. ^ Martelli, Alberto (1977). "On the Complexity of Admissible Search Algorithms". Artificial Intelligence. 8 (1): 1–13. doi:10.1016/0004-3702(77)90002-9.
  16. ^ Felner, Ariel; Uzi Zahavi (2011). "Inconsistent heuristics in theory and practice". Artificial Intelligence. 175 (9–10): 1570–1603. doi:10.1016/j.artint.2011.02.001.
  17. ^ Zhang, Zhifu; N. R. Sturtevant (2009). Using Inconsistent Heuristics on A* Search. Twenty-First International Joint Conference on Artificial Intelligence. pp. 634–639.
  18. ^ Pohl, Ira (1970). "First results on the effect of error in heuristic search". Machine Intelligence 5. Edinburgh University Press: 219–236. ISBN 978-0-85224-176-9. OCLC 1067280266.
  19. ^ Pearl, Judea (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley. ISBN 978-0-201-05594-8.
  20. ^ Chen, Jingwei; Sturtevant, Nathan R. (2019). "Conditions for Avoiding Node Re-expansions in Bounded Suboptimal Search". Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization: 1220–1226.
  21. ^ Chen, Jingwei; Sturtevant, Nathan R. (2021). "Necessary and Sufficient Conditions for Avoiding Reopenings in Best First Suboptimal Search with General Bounding Functions". Proceedings of the AAAI Conference on Artificial Intelligence. 35 (5): 3688–3696. doi:10.1609/aaai.v35i5.16485. ISSN 2374-3468.
  22. ^ Pohl, Ira (August 1973). "The avoidance of (relative) catastrophe, heuristic competence, genuine dynamic weighting and computational issues in heuristic problem solving" (PDF). Proceedings of the Third International Joint Conference on Artificial Intelligence (IJCAI-73). Vol. 3. California, USA. pp. 11–17.
  23. ^ K?ll, Andreas; Hermann Kaindl (August 1992). "A new approach to dynamic weighting". Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92). Vienna, Austria: Wiley. pp. 16–17. ISBN 978-0-471-93608-4.
  24. ^ Pearl, Judea; Jin H. Kim (1982). "Studies in semi-admissible heuristics". IEEE Transactions on Pattern Analysis and Machine Intelligence. 4 (4): 392–399. doi:10.1109/TPAMI.1982.4767270. PMID 21869053. S2CID 3176931.
  25. ^ Ghallab, Malik; Dennis Allard (August 1983). "Aε – an efficient near admissible heuristic search algorithm" (PDF). Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83). Vol. 2. Karlsruhe, Germany. pp. 789–791. Archived from the original (PDF) on 2025-08-04.
  26. ^ Reese, Bj?rn (1999). AlphA*: An ε-admissible heuristic search algorithm (Report). Institute for Production Technology, University of Southern Denmark. Archived from the original on 2025-08-04. Retrieved 2025-08-04.
  27. ^ Klein, Dan; Manning, Christopher D. (2003). "A* parsing: fast exact Viterbi parse selection" (PDF). Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. pp. 119–126. doi:10.3115/1073445.1073461.
  28. ^ Kagan E.; Ben-Gal I. (2014). "A Group-Testing Algorithm with Online Informational Learning" (PDF). IIE Transactions. 46 (2): 164–184. doi:10.1080/0740817X.2013.803639. S2CID 18588494. Archived from the original (PDF) on 2025-08-04. Retrieved 2025-08-04.
  29. ^ Ferguson, Dave; Likhachev, Maxim; Stentz, Anthony (2005). "A Guide to Heuristic-based Path Planning" (PDF). Proceedings of the international workshop on planning under uncertainty for autonomous systems, international conference on automated planning and scheduling (ICAPS). pp. 9–18. Archived (PDF) from the original on 2025-08-04.
  30. ^ Nau, Dana S.; Kumar, Vipin; Kanal, Laveen (1984). "General branch and bound, and its relation to A? and AO?" (PDF). Artificial Intelligence. 23 (1): 29–58. doi:10.1016/0004-3702(84)90004-3. Archived (PDF) from the original on 2025-08-04.
  31. ^ "Variants of A*". theory.stanford.edu. Retrieved 2025-08-04.
  32. ^ Hansen, Eric A.; Zhou, Rong (2007). "Anytime Heuristic Search". Journal of Artificial Intelligence Research. 28: 267–297. arXiv:1110.2737. doi:10.1613/jair.2096. S2CID 9832874.
  33. ^ Fareh, Raouf; Baziyad, Mohammed; Rahman, Mohammad H.; Rabie, Tamer; Bettayeb, Maamar (2020). "Investigating Reduced Path Planning Strategy for Differential Wheeled Mobile Robot". Robotica. 38 (2): 235–255. doi:10.1017/S0263574719000572. ISSN 0263-5747. S2CID 181849209.
  34. ^ Pijls, Wim; Post, Henk. Yet another bidirectional algorithm for shortest paths (PDF) (Technical report). Econometric Institute, Erasmus University Rotterdam. EI 2009-10. Archived (PDF) from the original on 2025-08-04.
  35. ^ Goldberg, Andrew V.; Harrelson, Chris; Kaplan, Haim; Werneck, Renato F. "Efficient Point-to-Point Shortest Path Algorithms" (PDF). Princeton University. Archived (PDF) from the original on 18 May 2022.
