Tools & Techniques

> Any CAEBM project has to start with setting up the specific list of parameters for the problem at hand: first the desired Output Parameters (OP) are defined, then the Input Parameters (IP) have to be found. Each parameter is specified by its (supposed) name, dimension, and range of values.
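
As a minimal sketch of what such a parameter list could look like in code (all names, units, and ranges below are hypothetical, chosen only for illustration):

```python
from dataclasses import dataclass

@dataclass
class Parameter:
    """One input (IP) or output (OP) parameter: name, dimension, value range."""
    name: str        # the (supposed) name, e.g. "inlet_temperature"
    dimension: str   # physical dimension / unit, e.g. "K"
    low: float       # lower bound of the expected value range
    high: float      # upper bound of the expected value range

# Hypothetical problem: two input parameters, one output parameter
output_parameters = [Parameter("efficiency", "dimensionless", 0.0, 1.0)]
input_parameters = [
    Parameter("inlet_temperature", "K", 250.0, 400.0),
    Parameter("flow_rate", "m^3/s", 0.1, 5.0),
]
```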


> Availability of "Big Data" is a necessary but not sufficient basis for setting up good and sufficient examples. To cover the intended parameter space, additional data from available data sources have to be added, and missing parameter values in "damaged examples" have to be filled in. Then, where appropriate, the data for verbal parameters are transformed into numerical ones, and finally all raw data are normalized for convenience, so that every dimension shows a comparable value range. Computers, of course, are a big help in performing these tasks mostly automatically.
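
The encoding and normalization steps might look like the following sketch. Mapping verbal values to integer codes and min-max scaling to [0, 1] are common choices, assumed here because the text does not prescribe a particular scheme:

```python
import numpy as np

def encode_verbal(values, codebook=None):
    """Map verbal (categorical) parameter values to numeric codes.
    The alphabetical mapping is an assumption, not a CAEBM prescription."""
    codebook = codebook or {v: i for i, v in enumerate(sorted(set(values)))}
    return np.array([codebook[v] for v in values], dtype=float), codebook

def normalize(X):
    """Min-max scale every column to [0, 1] so that all dimensions
    show comparable value ranges."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return (X - lo) / span

# Hypothetical raw data: one numeric column, one verbal column
numeric = np.array([250.0, 310.0, 395.0])
verbal, codebook = encode_verbal(["low", "high", "medium"])
X = normalize(np.column_stack([numeric, verbal]))
```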


> As "models", we use a special type of neural networks, which's representation capability is unlimited (see eg the "universal approximation theorem" at Wikipedia), and which show high scalability in size (= complexity of the problem at hand) and accuracy of the resulting models at the same time. Training of the neural networks, including definition of their size is done with the aid of appropriate Genetic Algorithms, which ensure, that the actual complexity of the relationships between all of the parameters is met, and strong generalization requirements can be fullfiled. - The deployment of computers again makes this modeling possible.


> As "generalization" and "quality assurance" are most important aspects of the modeling process, we deal very carefully with those tasks. Generalization is addressed by "rotating subsets" of training and test examples, resulting in clear info about the complexity of the underlying problem and the stability of it's modeling. And of course any meaningful partial derivativa of one or two parameters can be used to be compared to  (low dimensional) practical experiences of "local experts". Additionally, we use a special "confidence indicator" for any single answer drawn from the models at deployment time, to ensure, that the extrapolation capabilities of the models are not over-strained. - And of course again: Computers are the tools of choice to make these generalization and QA measures possible.