v7d_transform manpage
Usage: v7d_transform [options] inputfile1 [inputfile2...] outputfile
Vol7d transformation application. It imports a vol7d volume of sparse point data from a native vol7d file, from a dbAll.e database, from a BUFR/CREX file or from the SIM Oracle database, and exports it to a native v7d file, a BUFR/CREX file, a GRIB file, a configurable geojson file, a netcdf file or a configurable formatted csv file. If input-format is of file type, inputfile '-' indicates stdin; if input-format or output-format is of database type, inputfile/outputfile specifies the database access info in the form user/password@dsn and, if empty or '-', a suitable default is used. If output-format is of file type, outputfile '-' indicates stdout.
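A couple of hypothetical invocations (file names are placeholders), using the documented '-' conventions for stdin/stdout:

```sh
# BUFR in, native vol7d out
v7d_transform --input-format=BUFR --output-format=native obs.bufr obs.v7d

# read BUFR from stdin, write formatted csv to stdout
v7d_transform --input-format=BUFR --output-format=csv - -
```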
--input-format=STRING
format of input, 'native' for vol7d native binary file, 'BUFR' for BUFR file with generic template, 'CREX' for CREX file, 'dba' for dballe database, 'orsim' for SIM Oracle database [default=native]
-c STRING, --coord-file=STRING
file with horizontal coordinates of target interpolation points or with vertical coordinate of input levels, required if a geographical transformation is requested
--coord-format=STRING
format of input file with coordinates, 'native' for vol7d native binary file, 'BUFR' for BUFR file, 'CREX' for CREX file (sparse points), 'shp' for shapefile (sparse points or polygons) [default=BUFR]
--extreme-file=STRING
file with percentiles, required to normalize data before computing the NDI; by default, the installed files in the system path are read
--extreme-format=STRING
format of the input file with extremes: 'native' for vol7d native binary file, 'BUFR' for BUFR file, 'CREX' for CREX file (sparse points) [default=BUFR]
-s STRING, --start-date=STRING
if input-format is of database type, initial date for extracting data, in ISO format YYYY-MM-DD hh:mm:ss.msc, where characters on the right are optional [default=]
-e STRING, --end-date=STRING
if input-format is of database type, final date for extracting data, in ISO format YYYY-MM-DD hh:mm:ss.msc, where characters on the right are optional [default=]
-n STRING, --network-list=STRING
if input-format is of database type, list of station networks to be extracted in the form of a comma-separated list of alphanumeric network identifiers [default=]
-v STRING, --variable-list=STRING
if input-format is of database type, list of data variables to be extracted in the form of a comma-separated list of B-table alphanumeric codes, e.g. 'B13011,B12101'; if omitted means all [default=MISSING]
--anavariable-list=STRING
if input-format is of database type, list of station variables to be extracted in the form of a comma-separated list of B-table alphanumeric codes, e.g. 'B01192,B01193,B07001'; if omitted means all [default=MISSING]
--attribute-list=STRING
if input-format is of DB-all.e type, list of data attributes to be extracted in the form of a comma-separated list of B-table alphanumeric codes, e.g. 'B33196,B33197'; for no attribute set attribute-list to empty string '' ; if attribute-list is not provided all present attributes in input will be imported [default=MISSING]
--level=STRING
if input-format is of database type, vertical level to be extracted in the form level1,l1,level2,l2; empty fields indicate missing data; default is all levels in the database [default=,,,]
--timerange=STRING
if input-format is of database type, timerange to be extracted in the form timerange,p1,p2; empty fields indicate missing data; default is all timeranges in the database [default=,,]
--ana=STRING
if input-format is of database type, ana to be extracted in the form lon,lat,ident; empty fields indicate missing data; default is all ana in the database [default=,,,]
--set-network=STRING
if input-format is of database type, collapse all the input data into a single pseudo-network with the given name, empty for keeping the original networks [default=]
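The database-related options above combine as in this sketch (the connection string, dates and network name are placeholders; the B codes are the examples from this page):

```sh
# extract temperature and precipitation from a dbAll.e database
# for one day, collapsing all networks into a pseudo-network 'obs'
v7d_transform --input-format=dba \
  --start-date='2023-01-01 00:00' --end-date='2023-01-02 00:00' \
  --variable-list=B12101,B13011 --set-network=obs \
  user/password@dsn out.v7d
```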
--disable-qc
disable data removal based on SIMC quality control.
-d, --display
briefly display the imported data volume; warning: this option is incompatible with output on stdout.
--comp-filter-time
filter the time series, keeping only the data selected by comp-start, comp-stop, comp-step and comp-cyclicdatetime
--comp-cyclicdatetime=STRING
date and time in the format TMMGGhhmm, where any repeated group of characters should be '/' if missing; only the selected ten-day period/month/day/hour/minute is taken into account. Use it to specify, for example, every January in all years, or the same time in all days, and so on
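As a sketch of the matching semantics described above (an illustration of the assumed behavior, not the actual libsim implementation): reduce a datetime to the string TMMGGhhmm (T = ten-day period 1-3, MM month, GG day, hh hour, mm minute) and treat each '/' in the pattern as a single-character wildcard:

```shell
# hypothetical helper, not part of v7d_transform
cyclic_match() {
  # $1 = pattern (e.g. '/01//////'), $2 = TMMGGhhmm string of a datetime
  glob=$(printf '%s' "$1" | tr '/' '?')
  case "$2" in
    $glob) return 0 ;;
    *)     return 1 ;;
  esac
}

# '/01//////' selects every January, in any year, at any day/time:
cyclic_match '/01//////' '201151230' && echo match    # 15 Jan, 12:30
cyclic_match '/01//////' '202151230' || echo no-match # 15 Feb, 12:30
```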
--comp-stat-proc=STRING
statistically process data with an operator specified in the form [isp:]osp where isp is the statistical process of input data which has to be processed and osp is the statistical process to apply and which will appear in output timerange; possible values for isp and osp are 0=average, 1=accumulated, 2=maximum, 3=minimum, 254=instantaneous, but not all the combinations make sense; if isp is not provided it is assumed to be equal to osp [default=]
--comp-step=STRING
length of regularization or statistical processing step in the format 'YYYYMMDD hh:mm:ss.msc', it can be simplified up to the form 'D hh' [default=0000000001 00:00:00.000]
--comp-start=STRING
start of the regularization or statistical processing interval; an empty value means taking the initial time step of the available data; in ISO format YYYY-MM-DD hh:mm:ss.msc, where characters on the right are optional [default=]
--comp-stop=STRING
stop of the filter interval; an empty value means taking the ending time step of the available data; in ISO format YYYY-MM-DD hh:mm:ss.msc, where characters on the right are optional [default=]
--comp-keep
keep the data that are not the result of the requested statistical processing, merging them with the result of the processing
--comp-full-steps
compute statistical processing by differences only on intervals with forecast time equal to a multiple of comp-step; otherwise all reasonable combinations of forecast times are computed
--comp-frac-valid=REAL
(from 0. to 1.) specify the fraction of input data that has to be valid in order to consider a statistically processed value acceptable; for instantaneous data the criterion is the longest time allowed between two contiguous valid data within the comp-step interval, following the rule longest=comp-step/(comp-frac-valid*999+1), thus comp-frac-valid == 0 => longest=comp-step and comp-frac-valid == 1 => longest=comp-step/1000 [default=1.00000000]
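The rule above can be checked numerically; a minimal sketch, assuming for simplicity that comp-step is expressed in seconds (the actual option uses the 'YYYYMMDD hh:mm:ss.msc' form):

```shell
# longest = comp-step / (comp-frac-valid*999 + 1)
comp_step=86400   # one day, in seconds
frac=0.5
longest=$(awk -v s="$comp_step" -v f="$frac" \
  'BEGIN{printf "%.3f", s/(f*999+1)}')
echo "$longest"

# boundary cases stated in the option description:
awk -v s="$comp_step" 'BEGIN{print s/(0*999+1)}'  # frac=0 -> comp-step
awk -v s="$comp_step" 'BEGIN{print s/(1*999+1)}'  # frac=1 -> comp-step/1000
```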
--comp-sort
sort all sortable dimensions of the volume after the computations
--comp-fill-data
fill missing instantaneous data with the nearest in time, within comp-fill-tolerance
--comp-fill-tolerance=STRING
length of filling step in the format 'YYYYMMDDDD hh:mm:ss.msc', it can be simplified up to the form 'D hh' [default=0000000001 00:00:00.000]
--pre-trans-type=STRING
transformation type (sparse points to sparse points) to be applied before other computations, in the form 'trans-type:subtype'; 'inter' for interpolation, with subtypes 'near', 'linear', 'bilin'; 'polyinter' for statistical processing within given polygons, with subtype 'average', 'stddev', 'max', 'min'; 'metamorphosis' with subtypes 'coordbb', 'poly' for selecting only data within a given bounding box or a set of polygons; empty for no transformation [default=]
--trans-level-type=INT[,INT...]
type of input and output level for vertical interpolation in the form intop,inbot,outtop,outbot, from grib2 table; inbot and outbot can either be empty (single surface) or equal to the corresponding top value (layer between 2 surfaces)
--trans-level-list=INT[,INT...]
list of output levels (or top surfaces) for vertical interpolation, the unit is determined by the value of level-type and taken from grib2 table
--trans-botlevel-list=INT[,INT...]
list of output bottom surfaces for vertical interpolation, the unit is determined by the value of level-type and taken from grib2 table
--post-trans-type=STRING
transformation type (sparse points to grid) to be applied after other computations, in the form 'trans-type:subtype'; 'inter' for interpolation, with subtype 'linear'; 'boxinter' for statistical processing within output grid box, with subtype 'average', 'stddev', 'max', 'min'; empty for no transformation; this option is compatible with output on gridded format only (see output-format) [default=]
--ilon=REAL
longitude of the southwestern bounding box corner for pre-transformation [default=0.00000000]
--ilat=REAL
latitude of the southwestern bounding box corner for pre-transformation [default=30.0000000]
--flon=REAL
longitude of the northeastern bounding box corner for pre-transformation [default=30.0000000]
--flat=REAL
latitude of the northeastern bounding box corner for pre-transformation [default=60.0000000]
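As a sketch of how the bounding-box corners above pair with a pre-transformation (coordinates and file names are arbitrary placeholders):

```sh
# keep only stations inside the given box
v7d_transform --pre-trans-type=metamorphosis:coordbb \
  --ilon=7.0 --ilat=43.5 --flon=13.0 --flat=47.0 \
  obs.v7d subset.v7d
```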
--ielon=REAL
longitude of the southwestern bounding box corner for import [default=MISSING]
--ielat=REAL
latitude of the southwestern bounding box corner for import [default=MISSING]
--felon=REAL
longitude of the northeastern bounding box corner for import [default=MISSING]
--felat=REAL
latitude of the northeastern bounding box corner for import [default=MISSING]
--comp-qc-ndi
enable computation of the index (NDI) for use by quality control.
--comp-qc-perc
enable computation of the index (percentile) for use by quality control.
--comp-qc-area-er
enable computation of the quality control index (percentile/NDI) only over the Emilia-Romagna area.
--output-format=STRING
format of output file, in the form 'name[:template]'; 'native' for vol7d native binary format (no template to be specified); 'BUFR' and 'CREX' for the corresponding formats, with template as an alias like 'synop', 'metar', 'temp', 'generic', empty for 'generic'; the special value 'generic-frag' generates BUFR files where ana data is reported only once at the beginning, followed by the data in subsequent BUFR messages; 'grib_api' for gridded output in grib format, where template (required) is the path name of a grib file whose first message defines the output grid and is used as a template for the output grib messages (see also post-trans-type); 'netcdf' for netcdf file following cf convention 1.1; 'geojson' for geojson format (no template to be specified); 'csv' for formatted csv format (no template to be specified) [default=native]
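For gridded output, the template grib file supplies the target grid; combined with a sparse-points-to-grid post-transformation (file names are placeholders):

```sh
# interpolate sparse observations onto the grid defined by grid.grib
v7d_transform --post-trans-type=boxinter:average \
  --output-format=grib_api:grid.grib obs.v7d out.grib
```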
--csv-column=STRING
list of columns that have to appear in csv output: a comma-separated selection of 'time,timerange,level,ana,network,var,value' in the desired order [default=time,timerange,ana,level,network]
--csv-loop=STRING
order of looping on descriptors in csv output: a comma-separated selection of 'time,timerange,level,ana,network,var' in the desired order, all the identifiers must be present, except 'var', which, if present, enables looping on variables and attributes as well [default=time,timerange,ana,level,network]
--csv-variable=STRING
list of variables that have to appear in the data columns of csv output: 'all' or a comma-separated list of B-table alphanumeric codes, e.g. 'B10004,B12101' in the desired order [default=all]
--csv-keep-miss
keep records containing only missing values in csv output, normally they are discarded
--csv-norescale
do not rescale in output integer variables according to their scale factor
--csv-header=INT
write 0 to 2 header lines at the beginning of csv output [default=2]
--geojson-variant=STRING
variant of geojson output, accepted values are 'simple' and 'rich' [default=simple]
--output-variable-list=STRING
list of data variables you require in output; if they are not in input they will be computed if possible. The output_variable_list is expressed in the form of a comma-separated list of B-table alphanumeric codes, e.g. 'B13011,B12101' [default=]
--rounding
simplify the volume, merging similar levels and timeranges
-h, --help
show a help message and exit
--version
show version and exit