Matches in DBpedia 2014 for { <http://dbpedia.org/resource/Partially_observable_Markov_decision_process> ?p ?o. }
Showing items 1 to 31 of 31, with 100 items per page.
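A listing like this can be reproduced against the public DBpedia SPARQL endpoint. A minimal sketch of how the request URL for the triple pattern above would be built (the endpoint URL and JSON results format are the standard ones; the code only constructs the URL and does not issue the HTTP call):

```python
from urllib.parse import urlencode

# Public DBpedia SPARQL endpoint (standard location).
ENDPOINT = "https://dbpedia.org/sparql"

# The triple pattern from the listing header: all predicate/object
# pairs for the POMDP resource.
query = """
SELECT ?p ?o WHERE {
  <http://dbpedia.org/resource/Partially_observable_Markov_decision_process> ?p ?o .
}
"""

# Encode the query and ask for SPARQL JSON results.
params = urlencode({"query": query, "format": "application/sparql-results+json"})
url = f"{ENDPOINT}?{params}"
```

Fetching `url` with any HTTP client returns the same predicate/object pairs shown below.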
- Partially_observable_Markov_decision_process abstract "A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process. A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP. The POMDP framework is general enough to model a variety of real-world sequential decision processes. Applications include robot navigation problems, machine maintenance, and planning under uncertainty in general. The framework originated in the operations research community, and was later taken over by the artificial intelligence and automated planning communities. An exact solution to a POMDP yields the optimal action for each possible belief over the world states. The optimal action maximizes (or minimizes) the expected reward (or cost) of the agent over a possibly infinite horizon. The sequence of optimal actions is known as the optimal policy of the agent for interacting with its environment.".
- Partially_observable_Markov_decision_process wikiPageExternalLink index.php?n=Main.HomePage.
- Partially_observable_Markov_decision_process wikiPageExternalLink pypomdp.
- Partially_observable_Markov_decision_process wikiPageExternalLink index.shtml.
- Partially_observable_Markov_decision_process wikiPageExternalLink zmdp.
- Partially_observable_Markov_decision_process wikiPageExternalLink spudd.
- Partially_observable_Markov_decision_process wikiPageID "3063552".
- Partially_observable_Markov_decision_process wikiPageRevisionID "583409719".
- Partially_observable_Markov_decision_process hasPhotoCollection Partially_observable_Markov_decision_process.
- Partially_observable_Markov_decision_process subject Category:Dynamic_programming.
- Partially_observable_Markov_decision_process subject Category:Markov_processes.
- Partially_observable_Markov_decision_process subject Category:Stochastic_control.
- Partially_observable_Markov_decision_process type Abstraction100002137.
- Partially_observable_Markov_decision_process type Act100030358.
- Partially_observable_Markov_decision_process type Activity100407535.
- Partially_observable_Markov_decision_process type Event100029378.
- Partially_observable_Markov_decision_process type MarkovProcesses.
- Partially_observable_Markov_decision_process type Procedure101023820.
- Partially_observable_Markov_decision_process type PsychologicalFeature100023100.
- Partially_observable_Markov_decision_process type YagoPermanentlyLocatedEntity.
- Partially_observable_Markov_decision_process comment "A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process. A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state.".
- Partially_observable_Markov_decision_process label "Partially observable Markov decision process".
- Partially_observable_Markov_decision_process label "Processus de décision markovien partiellement observable".
- Partially_observable_Markov_decision_process label "部分可觀察馬可夫決策過程".
- Partially_observable_Markov_decision_process sameAs Processus_de_décision_markovien_partiellement_observable.
- Partially_observable_Markov_decision_process sameAs m.08p0k3.
- Partially_observable_Markov_decision_process sameAs Q176814.
- Partially_observable_Markov_decision_process sameAs Partially_observable_Markov_decision_process.
- Partially_observable_Markov_decision_process wasDerivedFrom Partially_observable_Markov_decision_process?oldid=583409719.
- Partially_observable_Markov_decision_process isPrimaryTopicOf Partially_observable_Markov_decision_process.
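The abstract above says the agent must maintain a probability distribution over states, updated from observations and the underlying MDP. That belief update can be sketched as b'(s') ∝ O(o | s') · Σ_s T(s' | s, a) · b(s). A minimal illustration with an assumed 2-state toy model (all transition and observation numbers are made up for the example, not from DBpedia):

```python
def belief_update(b, T, O, a, o):
    """Return the normalized posterior belief over states after
    taking action a and receiving observation o."""
    n = len(b)
    # Predict step: push the current belief through the transition model,
    # predicted[s2] = sum_s T(s'=s2 | s, a) * b(s).
    predicted = [sum(T[s][a][s2] * b[s] for s in range(n)) for s2 in range(n)]
    # Correct step: weight each predicted state by the observation
    # likelihood O(o | s', a), then renormalize to a distribution.
    unnorm = [O[s2][a][o] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Toy model: 2 states, 1 action, 2 observations (illustrative numbers).
T = [  # T[s][a][s'] = P(s' | s, a)
    [[0.7, 0.3]],
    [[0.2, 0.8]],
]
O = [  # O[s'][a][o] = P(o | s', a)
    [[0.9, 0.1]],
    [[0.4, 0.6]],
]

b = [0.5, 0.5]                       # uniform prior over the two states
b_next = belief_update(b, T, O, a=0, o=0)
```

Observation 0 is more likely in state 0 here, so the posterior `b_next` shifts probability mass toward state 0 while remaining a valid distribution.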