PlanFM Resources
Repositories and datasets connected to PlanFM research on compact foundation models for planning-like tasks, state-centric representations, learned transition models, and downstream planning evaluation.
Quantization
Code for quantization experiments on compact state-centric planning models, supporting efficient deployment of planning foundation model components.
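
As a minimal sketch of what such an experiment can look like, the snippet below applies PyTorch post-training dynamic quantization to a stand-in model; the architecture, vocabulary size, and layer widths are illustrative assumptions, not the repository's actual model.

    import torch
    import torch.nn as nn

    # Hypothetical stand-in for a compact state-centric planning model:
    # an embedding over state tokens followed by a small transition head.
    model = nn.Sequential(
        nn.Embedding(512, 128),  # vocabulary and width are illustrative
        nn.Flatten(start_dim=1),
        nn.Linear(128, 256),
        nn.ReLU(),
        nn.Linear(256, 512),     # scores over successor-state tokens
    )

    # Post-training dynamic quantization: Linear weights are stored as
    # int8 and dequantized on the fly, shrinking the model for deployment.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    state_tokens = torch.randint(0, 512, (1, 1))  # one tokenized state
    with torch.no_grad():
        scores = quantized(state_tokens)
    print(scores.shape)  # torch.Size([1, 512])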
Tokenization
Code for representing symbolic planning states as learning-ready tokens, including tokenization choices used to study generalization in learned transition models.
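
As a rough illustration of one such tokenization choice (the atom format, special tokens, and canonical ordering below are assumptions, not the repository's scheme):

    # Toy state tokenizer: each ground atom such as "(on a b)" is split
    # into predicate/argument tokens, with a separator between atoms.
    def tokenize_state(atoms, vocab):
        tokens = ["[STATE]"]
        for atom in sorted(atoms):  # canonical order: permuted states match
            tokens.extend(atom.strip("()").split())
            tokens.append("[SEP]")
        # Grow the vocabulary on first sight of a token.
        return [vocab.setdefault(t, len(vocab)) for t in tokens]

    vocab = {}
    state = ["(on a b)", "(clear a)", "(ontable b)"]
    print(tokenize_state(state, vocab))

Sorting the atoms makes the encoding order-invariant, which is one of the design choices that matters when studying generalization in learned transition models.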
Plan Validity
Code for the downstream plan-validity task, where state-centric learning is used to assess whether generated plan traces satisfy planning constraints.
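
For reference, a symbolic baseline for the same task fits in a few lines; the STRIPS-style action encoding below (precondition/add/delete sets of ground atoms) is an assumption made for illustration.

    # Minimal STRIPS-style plan validation over sets of ground atoms.
    def plan_is_valid(init, goal, plan):
        state = set(init)
        for pre, add, delete in plan:
            if not pre <= state:             # a precondition fails
                return False
            state = (state - delete) | add   # apply the action's effects
        return goal <= state                 # the goal must hold at the end

    init = {"(ontable a)", "(clear a)", "(handempty)"}
    goal = {"(holding a)"}
    pickup_a = ({"(ontable a)", "(clear a)", "(handempty)"},  # preconditions
                {"(holding a)"},                              # add effects
                {"(ontable a)", "(clear a)", "(handempty)"})  # delete effects
    print(plan_is_valid(init, goal, [pickup_a]))  # True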
Generalized Planning
Official implementation for learning transition dynamics over state representations and decoding executable plans through symbolic successor selection.
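
A sketch of how decoding with symbolic successor selection can work, assuming a learned scorer and a symbolic successor generator; score_fn, successors, and is_goal are hypothetical stand-ins for the repository's components.

    # Greedy plan decoding: the learned model ranks candidates, but only
    # symbolically valid successors are ever considered, so decoded plans
    # stay executable by construction.
    def decode_plan(state, score_fn, successors, is_goal, max_steps=50):
        plan = []
        for _ in range(max_steps):
            if is_goal(state):
                return plan
            candidates = successors(state)   # list of (action, next_state)
            if not candidates:
                return None                  # symbolic dead end
            action, state = max(candidates,
                                key=lambda c: score_fn(state, c[0]))
            plan.append(action)
        return None                          # step budget exhausted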
Benchmark Dataset
FABLE evaluates language models on data-flow analysis over procedural text, including planning scenarios, travel routes, and recipes. The FABLE+ dataset is available on Hugging Face, with code and benchmark generation support in the GitHub repository.
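
To make the task concrete, here is a toy version of the kind of data-flow question FABLE poses over procedural text; the (output, inputs) step encoding is an assumption made for illustration, not the benchmark's format.

    # Toy data-flow pass over recipe-like steps: which ingredients
    # transitively flow into a given product?
    steps = [
        ("bowl",   ["flour", "sugar"]),  # mix flour and sugar in a bowl
        ("batter", ["bowl", "eggs"]),    # beat in the eggs
        ("cake",   ["batter"]),          # bake the batter
    ]

    def depends_on(target, steps):
        producers = {out: ins for out, ins in steps}
        seen, stack = set(), [target]
        while stack:
            for src in producers.get(stack.pop(), []):
                if src not in seen:
                    seen.add(src)
                    stack.append(src)
        return seen

    print(sorted(depends_on("cake", steps)))
    # ['batter', 'bowl', 'eggs', 'flour', 'sugar']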
PhD Dissertation
Dissertation work by Vishal Pallagani on language-model methods for automated planning, covering compact state-centric planning models, planning ontologies, plan summarization, neuro-symbolic planning architectures, and applied decision-making systems.