Config:Plan

From SUMOwiki

Generated for SUMO toolbox version 7.0. We are well aware that the documentation is not always complete and may be out of date in places. We try to document everything as best we can, but much is limited by available time and manpower; we are a university research group, after all. The most up-to-date documentation can always be found (if not here) in the default.xml configuration file and, of course, in the source files. If something is unclear, please don't hesitate to ask.


Plan

ContextConfig

Default components; these should normally not be changed unless you know what you are doing.

<!--Default components, these should normally not be changed unless you know what you are doing-->
   <ContextConfig>default</ContextConfig>

SUMO

Default components; these should normally not be changed unless you know what you are doing.

<!--Default components, these should normally not be changed unless you know what you are doing-->
   <SUMO>default</SUMO>

LevelPlot

Default components; these should normally not be changed unless you know what you are doing.

<!--Default components, these should normally not be changed unless you know what you are doing-->
   <LevelPlot>default</LevelPlot>

Simulator

This is the problem we are going to model; it refers to the name of a project directory in the examples/ folder. It is also possible to specify an absolute path, or a particular XML file within a project directory.

<!--This is the problem we are going to model, it refers to the name of a project directory in the examples/ folder. It is also possible to specify an absolute path or to specify a particular xml file within a project directory-->
   <Simulator>Math/Academic2DTwice</Simulator>
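The three path forms described above can be sketched as follows. This is an illustrative sketch: the absolute path and the file name inside the project directory are hypothetical examples, not files shipped with the toolbox.

```xml
<!-- Name of a project directory under examples/ (the default form) -->
<Simulator>Math/Academic2DTwice</Simulator>

<!-- Hypothetical absolute path to a project directory -->
<Simulator>/home/user/myprojects/MyProblem</Simulator>

<!-- Hypothetical: a particular xml file inside a project directory -->
<Simulator>Math/Academic2DTwice/Academic2DTwice.xml</Simulator>
```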

Run

Runs can be given a custom name using the name attribute; a repeat attribute makes it possible to repeat a run multiple times. Placeholders available for run names include: #adaptivemodelbuilder# #simulator# #sampleselector# #output# #measure#

<!--Runs can be given a custom name by using the name attribute, a repeat attribute is also possible to repeat a run multiple times. Placeholders available for run names include: #adaptivemodelbuilder# #simulator# #sampleselector# #output# #measure#-->
   <Run name="" repeat="1">
      <!-- Entries listed here override those defined on plan level -->
 
      <!-- What experimental design to use for the very first set of samples -->
      <InitialDesign>lhdWithCornerPoints</InitialDesign>
 
      <!--
          The method to use for selecting new samples. Again 'default' is an id that refers to a
          SampleSelector tag defined below.  To switch off sampling simply remove this tag. -->
      <SampleSelector>default</SampleSelector>
 
      <!--
      How is the simulator implemented (ie, where does the data come from): 
        - Matlab script (matlab)
        - scattered dataset (scatteredDataset), 
        - local executable or script (local)
        - etc
 
        Make sure this entry matches what is declared in the simulator xml file
        in the project directory.  For example, it makes no sense to put matlab here if you only
        have a scattered dataset to work with.
      -->
      <SampleEvaluator>matlab</SampleEvaluator>
 
      <!--
          The AdaptiveModelBuilder specifies the model type and the hyperparameter optimization
          algorithm (= the algorithm to choose the model parameters, also referred to as the
          modeling algorithm or model builder) to use. The default value 'kriging' refers to Kriging models.
          'kriging' is an id that refers to an AdaptiveModelBuilder tag that is defined below.
      -->
      <AdaptiveModelBuilder>kriging</AdaptiveModelBuilder>
 
      <!-- How the quality of a model is assessed is determined by one or more Measures.  You can try different combinations
           of measures by specifying multiple measure tags.  It is the measure score(s) that drive the model parameter optimization.
           We recommend you do not use more than one measure unless you know what you are doing.
 
           If the use attribute is set to 'off' then the measure score is printed and logged, but is not used in the modeling itself.
           More examples of measures are shown below.
      -->
 
      <Measure type="CrossValidation" target="0.01" errorFcn="rootRelativeSquareError" use="on"/>
 
      <!-- By default all inputs are modeled.  If you want to only model a couple of inputs you can specify an Inputs tag as follows: 
 
      <Inputs>

         <Input name="x" />
         <Input name="y" />
         // Setting a simulator input to a constant (default is 0):
         <Input name="z"  value="14.6"/>
      </Inputs>
      -->
 
      <!--          
      By default the toolbox will model every single output using a separate model.  If you want to change this
      e.g., you only want to model a specific output, or you want to use different settings for each output; then you
      can specify an Outputs tag.
 
      The following is an example for the Academic2DTwice problem used in this file.  Remember that if you change
      the problem you are modeling, you will have to change this section too.
      -->
      <Outputs>
         <Output name="out">
            <!--
                You can specify output specific configuration here
 
            <SampleSelector>lola</SampleSelector>
            <AdaptiveModelBuilder>rational</AdaptiveModelBuilder>
            <Measure type="CrossValidation" target=".01" errorFcn="meanSquareError" use="on" />
            -->
         </Output>
 
         <Output name="outinverse">
            <!--
            <SampleSelector>delaunay</SampleSelector>
            <AdaptiveModelBuilder>rbf</AdaptiveModelBuilder>
            <Measure type="ValidationSet" target=".05" use="on" />
            -->
         </Output>
 
      </Outputs>
 
      <!--   
         This is a more complex example of how you can have different configurations per output.
      -->
      <!--
      <Outputs>

 
         * Model the modulus of complex output S22 using cross-validation and the default model
         builder and sample selector.
 
         <Output name="S22" complexHandling="modulus">
            <Measure type="CrossValidation" target=".05" />
         </Output>
 
         * Model the real part of complex output S22, but introduce some normally-distributed noise
         (variance .01 by default).
 
         <Output name="S22" complexHandling="real">
            <Measure type="CrossValidation" target=".05" />
            * for other types of modifiers see the datamodifiers subdirectory
            <Modifier type="Noise" />
         </Output>
      -->
 
       <!-- 
      More complex examples of how you can use measures:
 
      * 5-fold crossvalidation (warning expensive on some model types!)
      <Measure type="CrossValidation" target=".001" use="on">

         <Option key="folds" value="5"/>
      </Measure>   
 
      * Using a validation set, the size taken as 20% of the available samples
      <Measure type="ValidationSet" target=".001" errorFcn="meanAbsoluteError">
         <Option key="percentUsed" value="20"/>
      </Measure>
 
      * Using a validation set defined in an external file (scattered data)
      <Measure type="ValidationSet" target=".001">
         * the validation set comes from a file
         <Option key="type" value="file"/>
         * the test data is scattered data, so we need a scattered sample evaluator
         to load the data and evaluate the points. The filename is taken from the
         <ScatteredDataFile> tag in the simulator xml file.
         Optionally you can specify an option with key "id" to select a specific
         dataset if there is more than one choice.
         <SampleEvaluator
         type="ibbt.sumo.sampleevaluators.datasets.ScatteredDatasetSampleEvaluator"/>
      </Measure>
 
      * Used for testing optimization problems
         * Calculates the (relative) error between the current minimum and a known minimum.
           Often one uses this just as a stopping criterion for benchmarking problems.
         * trueValue: a known global minimum
      <Measure type="TestMinimum" errorFcn="relativeError" trueValue="-5.0" target="0.1" use="on" />   
      -->
   </Run>
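The name and repeat attributes described above can be sketched as follows. This is a hypothetical example: whether placeholders may be mixed with literal text (here an underscore) in the name attribute is an assumption, not something stated in this page.

```xml
<!-- Hypothetical: repeat the run 3 times; each run's name is built from the
     model type and the problem, e.g. combining #adaptivemodelbuilder# and #simulator# -->
<Run name="#adaptivemodelbuilder#_#simulator#" repeat="3">
   <InitialDesign>lhdWithCornerPoints</InitialDesign>
   <SampleSelector>default</SampleSelector>
   <SampleEvaluator>matlab</SampleEvaluator>
   <AdaptiveModelBuilder>kriging</AdaptiveModelBuilder>
   <Measure type="CrossValidation" target="0.01" errorFcn="rootRelativeSquareError" use="on"/>
</Run>
```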