Synthesis of Search Heuristics for Temporal Planning via Reinforcement Learning
by Andrea Micheli and Alessandro Valentini, in AAAI 2021
Abstract
Automated temporal planning is the problem of synthesizing, starting from a model of a system, a course of action that achieves a desired goal when temporal constraints, such as deadlines, are present. Despite considerable successes in the literature, scalability is still a severe limitation for existing planners, especially when they are confronted with real-world, industrial scenarios.
In this paper, we aim at exploiting recent advances in reinforcement learning for the synthesis of heuristics for temporal planning. Starting from a set of problems of interest for a specific domain, we use a customized reinforcement learning algorithm to construct a value function that estimates the expected reward for as many problems as possible. We use a reward schema that captures the semantics of the temporal planning problem, and we show how the value function can be transformed into a planning heuristic for a semi-symbolic heuristic-search exploration of the planning model. We show on two case studies how this method can widen the reach of current temporal planners, with encouraging results.
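As a rough, hedged illustration of the last step (turning a learned value estimator into a search heuristic), the sketch below assumes a hypothetical value_fn(state) that returns the expected reward from a state and uses its negation to rank nodes in a plain greedy best-first search. It is not the paper's algorithm; all names and conventions are illustrative.

```python
import heapq
import itertools

def make_heuristic(value_fn):
    """Turn a learned value estimator into a cost-like heuristic.

    Hypothetical convention: value_fn(state) returns the expected reward
    from `state`; higher means closer to the goal, so we negate it so
    that lower heuristic values are explored first.
    """
    return lambda state: -value_fn(state)

def greedy_best_first(initial_state, is_goal, successors, heuristic):
    """Plain greedy best-first search guided by `heuristic`.

    `successors(state)` is assumed to yield (action, next_state) pairs of
    the (temporal) search space; states are assumed hashable.
    """
    counter = itertools.count()  # tie-breaker so states are never compared
    frontier = [(heuristic(initial_state), next(counter), initial_state, [])]
    visited = set()
    while frontier:
        _, _, state, plan = heapq.heappop(frontier)
        if is_goal(state):
            return plan
        if state in visited:
            continue
        visited.add(state)
        for action, nxt in successors(state):
            if nxt not in visited:
                heapq.heappush(
                    frontier,
                    (heuristic(nxt), next(counter), nxt, plan + [action]))
    return None  # no plan found
```

In the paper's setting the heuristic guides a semi-symbolic exploration of temporal states rather than a fully explicit search like this one; the sketch only conveys how value estimates can order the frontier.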
Temporal Planning with Intermediate Conditions and Effects
by Alessandro Valentini, Andrea Micheli and Alessandro Cimatti, in AAAI 2020
Abstract
Automated temporal planning is the technology of choice when controlling systems that can execute multiple actions in parallel and when temporal constraints, such as deadlines, are needed in the model. One limitation of several action-based planning systems is that actions are modeled as intervals having conditions and effects only at the extremes and as invariants, while no conditions or effects can be specified at arbitrary points or sub-intervals.
In this paper, we address this limitation by providing an effective heuristic-search technique for temporal planning that allows the definition of actions with conditions and effects at arbitrary times within the action duration. We experimentally demonstrate that our approach significantly outperforms standard encodings in PDDL 2.1 and is competitive with other approaches that can (directly or indirectly) represent intermediate action conditions or effects.
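To make the modeling gap concrete, here is a minimal, hypothetical representation of a durative action whose conditions and effects can be attached to arbitrary points or sub-intervals of its duration, expressed as fractions of the duration. The data structures and the unload example are illustrative only and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TimedCondition:
    # Interval over which the condition must hold, as fractions of the
    # action duration: (0.0, 0.0) = at start, (1.0, 1.0) = at end,
    # (0.0, 1.0) = invariant, anything else = an intermediate interval.
    start_frac: float
    end_frac: float
    literal: str

@dataclass
class TimedEffect:
    # Time point of the effect as a fraction of the action duration;
    # values strictly between 0 and 1 model intermediate effects.
    time_frac: float
    literal: str

@dataclass
class DurativeAction:
    name: str
    min_duration: float
    max_duration: float
    conditions: List[TimedCondition] = field(default_factory=list)
    effects: List[TimedEffect] = field(default_factory=list)

# Toy example: unloading a package takes 10 time units, the robot must
# hold the package during the first half, and the package is released
# at 60% of the duration (an intermediate effect).
unload = DurativeAction(
    name="unload",
    min_duration=10.0, max_duration=10.0,
    conditions=[TimedCondition(0.0, 0.5, "holding(pkg)")],
    effects=[TimedEffect(0.6, "not holding(pkg)"),
             TimedEffect(1.0, "delivered(pkg)")],
)
```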
Strong Temporal Planning with Uncontrollable Durations
by Alessandro Cimatti, Minh Do, Andrea Micheli, Marco Roveri and David E. Smith, in Artificial Intelligence 2017
Abstract
Planning in real-world domains often involves modeling and reasoning about the duration of actions. Temporal planning allows such modeling and reasoning by looking for plans that specify start and end time points for each action. In many practical cases, however, the duration of actions may be uncertain and not under the full control of the executor. For example, a navigation task may take more or less time, depending on external conditions such as terrain or weather.
In this paper, we tackle the problem of strong temporal planning with uncontrollable action durations (STPUD). For actions with uncontrollable durations, the planner is only allowed to choose the start of each action, while its end is chosen, within known bounds, by the environment. A solution plan must be robust with respect to all uncontrollable action durations and must achieve the goal in every execution, despite the choices of the environment.
We propose two complementary techniques. First, we discuss a dedicated planning method that generalizes the state-space temporal planning framework, leveraging SMT-based techniques for temporal networks under uncertainty. Second, we present a compilation-based method that reduces any STPUD problem to an ordinary temporal planning problem. Moreover, we investigate a set of sufficient conditions to simplify domains by removing some of the uncontrollability.
We implemented both approaches and experimentally evaluated them on a large number of instances. Our results demonstrate the practical applicability of the two techniques, which show complementary behavior.
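To give a feel for what "robust with respect to all uncontrollable durations" requires, the toy check below validates a candidate schedule against a single deadline by enumerating the extreme durations of each action. It is a deliberately simplified illustration under that assumption, not the SMT-based or compilation-based technique of the paper, and all names are made up.

```python
from itertools import product
from typing import Dict, Tuple

def robust_against_durations(start_times: Dict[str, float],
                             duration_bounds: Dict[str, Tuple[float, float]],
                             deadline: float) -> bool:
    """Toy robustness check for a candidate schedule.

    `start_times` maps each action to its chosen (controllable) start time;
    `duration_bounds` maps each action to its (lower, upper) uncontrollable
    duration bounds.  The only requirement modeled here is that every action
    ends by `deadline`; enumerating the extreme durations suffices for such
    simple linear constraints, but this is only an illustration of the
    robustness requirement, not the paper's planning technique.
    """
    actions = list(start_times)
    for choice in product(*(duration_bounds[a] for a in actions)):
        for action, duration in zip(actions, choice):
            if start_times[action] + duration > deadline:
                return False
    return True

# Example: 'navigate' may take anywhere between 5 and 9 time units.
print(robust_against_durations({"navigate": 0.0, "sample": 4.0},
                               {"navigate": (5.0, 9.0), "sample": (2.0, 3.0)},
                               deadline=10.0))  # True: worst cases end at 9 and 7
```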
Validating Domains and Plans for Temporal Planning via Encoding into Infinite-State Linear Temporal Logic
by Alessandro Cimatti, Andrea Micheli and Marco Roveri, in AAAI 2017
Abstract
Temporal planning is an active research area of Artificial Intelligence because of its many applications ranging from robotics to logistics and beyond. Traditionally, authors focused on the automatic synthesis of plans given a formal representation of the domain and of the problem. However, the effectiveness of such techniques is limited by the complexity of the modeling phase: it is hard to produce a correct model for the planning problem at hand.
In this paper, we present a technique to simplify the creation of correct models by leveraging formal-verification tools for automatic validation. We start from the ANML language, a recently introduced, very expressive language for temporal planning problems, which we chose for its usability and readability. Then, we present a sound and complete formal encoding of the language into Linear Temporal Logic over predicates with infinite-state variables. Thanks to this reduction, we enable the formal verification of several relevant properties of the planning problem, providing useful feedback to the modeler.
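Purely as an illustration of the flavor of such encodings (and not the paper's actual translation of ANML), planning semantics can be phrased in LTL over infinite-state variables roughly as below, where exec_a marks that an action a is executing, end_a marks its end, fuel is a numeric fluent, and goal is the problem goal; all symbols are hypothetical.

```latex
% Illustrative sketch only -- not the paper's encoding of ANML.
\begin{align*}
  & \mathbf{G}\bigl(\mathit{exec}_a \rightarrow \mathit{fuel} \geq 0\bigr)
      && \text{the overall condition of $a$ holds while $a$ executes}\\
  & \mathbf{G}\bigl(\mathit{end}_a \rightarrow \mathbf{X}(\mathit{fuel}) = \mathit{fuel} - \Delta\bigr)
      && \text{the numeric effect of $a$ is applied when $a$ ends}\\
  & \mathbf{F}\,\mathit{goal}
      && \text{the goal is eventually achieved}
\end{align*}
```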