A concise introduction to models and methods for automated planning
Planning is the model-based approach to autonomous behavior, in which the agent's behavior is derived automatically from a model of its actions, sensors, and goals. The main challenges in planning are computational, as all of these models, whether or not they feature uncertainty and feedback, are intractable in the worst case when represented in compact form. In this book, we look at a variety of models used in AI planning and at the methods that have been developed for solving them. The goal is to provide a modern and coherent view of planning that is precise, concise, and mostly self-contained, without being shallow. To this end, we make no attempt to cover the whole variety of planning approaches, ideas, and applications, and focus instead on the essentials. The target audience of the book is students and researchers interested in autonomous behavior and planning from an AI, engineering, or cognitive science perspective.
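To make the model-based idea concrete, the sketch below is a minimal illustration, not material from the book: it encodes a toy classical planning problem as STRIPS-like actions over sets of atoms (the atoms and action names are invented for the example) and extracts a plan by blind breadth-first search, one of the blind search methods the book covers in Chapter 2.

# Minimal sketch (illustrative assumptions): a classical planning model consists of
# states, a known initial state, goal states, and deterministic actions; a plan is
# found here by blind breadth-first search over the state space.
from collections import deque

# STRIPS-like actions: (name, preconditions, add effects, delete effects).
ACTIONS = [
    ("pick",  {"hand-empty", "on-table"}, {"holding"}, {"hand-empty", "on-table"}),
    ("place", {"holding"},                {"hand-empty", "delivered"}, {"holding"}),
]

def successors(state):
    """Yield (action name, next state) for every action applicable in state."""
    for name, pre, add, delete in ACTIONS:
        if pre <= state:                       # preconditions hold in this state
            yield name, frozenset((state - delete) | add)

def bfs_plan(init, goal):
    """Return a shortest action sequence mapping init to a goal state, or None."""
    frontier = deque([(frozenset(init), [])])
    visited = {frozenset(init)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # all goal atoms are true
            return plan
        for name, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

print(bfs_plan({"hand-empty", "on-table"}, {"delivered"}))  # ['pick', 'place']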
1 online resource (xii, 129 pages) : illustrations
ISBN: 9781608459704, 9781608459698, 9783031015649, 1608459705, 1608459691, 3031015649
1. Planning and autonomous behavior
1.1 Autonomous behavior: hardwired, learned, and model-based
1.2 Planning models and languages
1.3 Generality, complexity, and scalability
1.4 Examples
1.5 Generalized planning: plans vs. general strategies
1.6 History
2. Classical planning: full information and deterministic actions
2.1 Classical planning model
2.2 Classical planning as path finding
2.3 Search algorithms: blind and heuristic
2.4 Online search: thinking and acting interleaved
2.5 Where do heuristics come from?
2.6 Languages for classical planning
2.7 Domain-independent heuristics and relaxations
2.8 Heuristic search planning
2.9 Decomposition and goal serialization
2.10 Structure, width, and complexity
3. Classical planning: variations and extensions
3.1 Relaxed plans and helpful actions
3.2 Multi-queue best-first search
3.3 Implicit subgoals: landmarks
3.4 State-of-the-art classical planners
3.5 Optimal planning and admissible heuristics
3.6 Branching schemes and problem spaces
3.7 Regression planning
3.8 Planning as SAT and constraint satisfaction
3.9 Partial-order causal link planning
3.10 Cost, metric, and temporal planning
3.11 Hierarchical task networks
4. Beyond classical planning: transformations
4.1 Soft goals and rewards
4.2 Incomplete information
4.3 Plan and goal recognition
4.4 Finite-state controllers
4.5 Temporally extended goals
5. Planning with sensing: logical models
5.1 Model and language
5.2 Solutions and solution forms
5.3 Offline solution methods
5.4 Online solution methods
5.5 Belief tracking: width and complexity
5.6 Strong vs. strong cyclic solutions
6. MDP planning: stochastic actions and full feedback
6.1 Goal, shortest-path, and discounted models
6.2 Dynamic programming algorithms
6.3 Heuristic search algorithms
6.4 Online MDP planning
6.5 Reinforcement learning, model-based RL, and planning
7. POMDP planning: stochastic actions and partial feedback
7.1 Goal, shortest-path, and discounted POMDPs
7.2 Exact offline algorithms
7.3 Approximate and online algorithms
7.4 Belief tracking in POMDPs
7.5 Other MDP and POMDP solution methods
8. Discussion
8.1 Challenges and open problems
8.2 Planning, scalability, and cognition
Bibliography
Author's biography
Part of: Synthesis Digital Library of Engineering and Computer Science