FESTA handbook Data Analysis and Modelling
9. Data Analysis and Modelling
9.1 Introduction
The strategy and the steps of data analysis need to be planned in order to provide an overall assessment of the impact of a system from the experimental data. Although Data Analysis appears as one of the later stages in the FESTA V, experts have stressed through FOT-Net 2 that planning for the later analysis work should be done from the start, and analysts need to be involved in formulating the research questions to ensure the integrity of the study. Data analysis is not an automatic task limited to running a set of calculation algorithms. It is the place where hypotheses, data and models are confronted. There are three main difficulties:
- the huge and complex amount of data, coming from different sensors as well as from questionnaires and video, that needs to be processed;
- the potential bias in the estimated impact of the system(s) on behaviour which may arise from sampling issues, including the location of the study, the selection of a relatively small sample of drivers, etc.;
- the resort to auxiliary models, such as simulation models, to extrapolate from the behavioural effects estimated and tested within the sample to effects at the level of the whole transport system.
To be confident in the robustness of the outputs of the data analysis, one has to follow some strategic rules and apply the required techniques, such as appropriate statistical tests or data mining to uncover hidden patterns in the data[1], to the whole chain and to each of its five links (Figure 9.1).
Figure 9.1: Block diagram for the data analysis
Some specific actions are required to tackle the difficulties mentioned above and to ensure the quality and robustness of the data analysis:
- A pilot study is a prerequisite to check the feasibility of the chain of data collection and evaluation and to achieve a pre-evaluation of the usefulness of the system.
- The data flow has to be monitored both in detail and overall. One of the strategic rules to follow is to ensure local and global consistency in data processing, handling and analysis.
- The sources of variability and bias in the performance indicators have to be identified, where feasible, in order to control for them in the data analysis.
- There is a crucial need for an integrative assessment process which should ideally combine within a meta-model information gathered on the usability, usefulness and acceptability of the system with the observed impacts of the system on behaviour. The estimated effects obtained from the sample of drivers and data have to be extrapolated using auxiliary models to scale them up.
- Appropriate techniques have to be applied at each link of the chain: data quality; data processing, data mining and video analysis; performance indicator calculation; hypothesis testing; and global assessment. These techniques come from two sets of statistical and informatics tools belonging to the two main kinds of data analysis: exploratory (data mining) and confirmatory or inferential (statistical testing).
9.2 Large Data-set handling
An FOT often collects so much data that there are not enough resources and time to analyse it all within the timeframe of the FOT project. There are different choices when it comes to the selection of data for analysis.
One option is to take the "space mission" approach, in which as much data as possible is collected, because the FOT provides a unique opportunity (and funding) to collect data which may be hard to collect later on. This approach gives a rich dataset, which increases the probability that the data will be re-used in future projects. However, before starting data collection, it is recommended to develop a plan for how to store the data and how to make it available for later analysis or analysis by others. This plan should specify detailed data dictionaries, open software formats, rules for data access and other relevant information as metadata.
Analysing data later on, or re-using data from other projects, is a good idea, as it reduces the need for an expensive and time-consuming data collection phase. Researchers should, however, be aware that data may become out of date, because traffic, vehicles, and driver support and information systems change. Data collected today might therefore not be of much relevance in ten years' time, because of the changed environment and driver behaviour. However, although the context may change, the fundamentals of driving behaviour do not. Whether data can be re-used fruitfully therefore depends on what one wants to know about driving with a support or information system. New projects should be aware that sponsors and stakeholders may want fresh data, but interest in the global re-use of data from other projects is increasing.
The opposite approach is to collect only a minimum set of relevant data, or to trigger data collection only for the specific events of interest. Limiting data to specific events may have the consequence that it is not possible to look at generalised behavioural side-effects. Selection of data should be driven in the first place by the research questions that need to be answered. With limited resources it may be useful to find a compromise between an explorative study with naturalistic driving and a stricter experimental study in which the expected behaviour of drivers and systems is evoked in a more condensed manner, requiring less time and providing more focussed data. Using this selected data for other purposes and projects might not be feasible, as it has been collected for certain research questions. Even in later analysis, the specification of the relevant data may need to change (e.g. the threshold for an event) because of new findings within the analysis; adapting such pre-selected data will then not be possible, because the underlying data are missing.
To make analysis more efficient, it is recommended to take a layered approach to data analysis, making sure that those data needed to provide information on the research questions are selected first, before going into a detailed analysis. Moreover, it needs to be checked whether the selected data are appropriate for the intended analysis before the actual data analysis is started.
It should be kept in mind that, very often, in the end the complete data set cannot be analysed due to:
- delays,
- missing data,
- bad data quality,
- budget restrictions,
- limited time,
- restricted access to the gathered data in the database.
The lack of resources to analyse all data is usually a lack of human resources, not a problem of computational resources. Methods for automating the analysis are therefore needed, in particular to speed up the processing of data (e.g. the recognition of events). The analysis of video data is generally a time-consuming task, which should be considered in the planning from the beginning. Data mining methods are important to tackle this problem. An additional resourcing problem is that data analysis comes late in a project. If delays occur in the data collection phase, which is often the case, the data analysis phase may have to be shortened and its resources diminished. It is therefore important to plan the data analysis from the beginning of the project.
The processed data for analysis is generally stored in databases. The performance of a database decreases with the amount of stored data, so intelligent approaches to data storage need to be applied in order to avoid unnecessary processing time. Data sets for the analysis may be defined in advance as part of the data acquisition scheme and then processed before storage in the databases.
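As a minimal sketch of such pre-processing, slowly varying context signals could be reduced to a lower rate before being written to the analysis database, while dynamic signals keep the full logging rate. The column names, the 10 Hz logging rate and the 1 Hz context rate below are assumptions for illustration only.

```python
import pandas as pd

def prepare_for_storage(journey: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Split one journey (assumed 10 Hz, DatetimeIndex) into full-rate and downsampled tables."""
    dynamic = journey[["speed_ms", "lat_acc_g"]]                                 # kept at full rate
    context = journey[["outside_temp_c", "road_type"]].resample("1s").first()    # 1 Hz is enough
    return dynamic, context
```

Which signals can safely be stored at a reduced rate is a project-specific decision that should be fixed as part of the data acquisition scheme.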
9.3 Consistency of the chain of data treatment
There will be many computations and data flows, starting from the measurements collected in the database, through the estimation of performance indicators, to the testing of hypotheses and on to the global assessment. This process, in the form of a chain of operations, has to be monitored both in detail and overall. There are five operations linked together in terms of data treatment. In addition, three kinds of models are needed to support the three top operations: probability models for justifying the calculations of the performance indicators, integration models to interpret the results of the tests in a systemic way, and auxiliary models to assess the effects on a larger scale (scaling up). Moving from the data to an overall assessment is not only a bottom-up process; it also has to include some feedback (Figure 9.2). There are two movements along this chain: a data flow going up, and a control feedback loop from the top which concerns the consistency of the evaluation process and which mainly depends on the control of uncertainty.
In moving up the chain, the consistency of each operation can be checked locally against specifications which are governed by the nature of the performance indicators, which in turn correspond to the set of hypotheses related to the use cases of the system. For each PI, there are rules which ensure the validity of the calculation procedures. For example, it is important to sample data which can change rapidly at a high data rate: the sampling rate must fit the variability of the variable. From a database design point of view, however, it may be easier to collect relatively static data at a high frequency as well.
As a complement to local consistency, a global criterion is to have sufficient sample size to get enough power to carry out the test of a hypothesis or to make an overall assessment with enough precision. This is a feedback loop coming from the top to control the uncertainty of the estimations. The precision required for measurements depends on the uncertainty of the auxiliary models, of the regression models and of the probability models.
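As an illustration of this sample-size consideration, the sketch below uses the power calculation from statsmodels for a simple two-group comparison of a performance indicator. The standardised effect size of 0.3, the 5% significance level and the 80% power target are assumptions chosen for illustration; a within-subject design would call for the corresponding paired calculation.

```python
from statsmodels.stats.power import TTestIndPower

# How many drivers per condition are needed to detect a (hypothetical) standardised
# effect of 0.3 on the chosen PI with 5% significance and 80% power?
n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Required drivers per condition: {n_per_group:.0f}")
```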
Figure 9.2: Deployment of the chain with feedbacks and additional models
9.4 Precision in sampling
The aim is to measure the effect of an intervention or treatment (which in the case of an FOT is the use of a system or systems) on a sample of subjects and in various driving situations, while controlling for external conditions. From the sample, we have to infer the effect in the population by aggregating the values obtained through the sensors without and with the system, to get an estimate of the effect on the chosen PI. How can we ensure that this inference is valid, in other words that the estimate is close to the true effect in the population? The precision of the estimate depends on the bias and the variance, which can be combined into a measure of the sampling error (Wonnacott and Wonnacott, 1990). To control the bias and variance, one has to rely on a well-defined sampling plan using appropriate randomisation at the different levels of sampling: driver, driving situations and measurement.
Consideration should be given to identifying the possible sources of (unintended) bias and variance in the sample, and to either minimising these or accounting for them in the data analysis. This is one of the most fundamental principles of statistical methods.
- Driver variation. The simple fact of the matter is that drivers vary. The range of behaviours that drivers exhibit (in terms of speed selection, headway preference, overtaking behaviour) is immense, but fortunately the variation obeys some probability laws and models. Strict randomisation procedures ensure that only the factor being varied (or whose variation we are observing) acts systematically. However, strict randomisation is not usually possible or desirable[2] in an FOT, particularly when the sample sizes are relatively small. The theoretically best method is to stratify the population of drivers according to some variables or factors related to the outcome and to sample proportionally to the size of each sub-population and to the a priori variance of the outcome (e.g. speed choice). For practical reasons, a different sampling or selection procedure may be followed. In either case, it is important to be able to compare the sample to the overall driver population in order to identify the main discrepancies and to assess possible sources of bias.
- Driving situation variation. There will be variation within and between the journeys and the driving situations within these journeys. For example a particular journey may be affected by congestion part-way through, or weather conditions may change from day to day. This type of variation cannot be controlled and is considered to be random. The observation period should be sufficiently long to allow for these random effects. One example here is that seasonal effects should be considered.
- Measurement variation. Once in a driving situation, we obtain, by means of the sensors, a series of measurements at a certain frequency; the size of this series is not fixed but varies. Each set of measurements within a driving situation constitutes, in terms of sampling theory, a sample of units taken from a cluster. Usually there is correlation between the measured outcomes, so the information coming from this sample of measurements is not as rich as it would be from an independent sample. One such cluster is at the driver level: the data collected from one driver are not independent.
How can the variance of the estimate of an outcome from the experiment be quantified, taking into account these three sources of variation? The total variance of the average of the indicator over the sample breaks down into inter-individual, intra-individual and infra-situational variance. If the inter-individual variance is large, an increase in the number of situations observed and in the number of measurement points per situation will not bring any gain in precision (Särndal et al., 1992). However, it may help to ensure a reduction in bias from, for example, seasonality.
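The role of the inter-individual variance can be made concrete with a small sketch of two-stage (cluster) sampling. The variance components below are hypothetical; the formula is the standard approximation for the variance of a mean estimated from n_drivers clusters with m observations each.

```python
def var_of_mean(sigma2_between: float, sigma2_within: float,
                n_drivers: int, m_per_driver: int) -> float:
    """Approximate variance of the overall mean PI under two-stage sampling."""
    return sigma2_between / n_drivers + sigma2_within / (n_drivers * m_per_driver)

# Increasing the number of observations per driver shrinks only the second term:
for m in (10, 100, 1000):
    print(m, round(var_of_mean(4.0, 9.0, n_drivers=30, m_per_driver=m), 4))
# With large between-driver variance, precision is gained mainly by recruiting more drivers.
```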
9.5 Requirements for integration and scaling up
Having treated and aggregated the data by means of statistical models, there are two kinds of problems to solve, related first to the synthesis of the outputs and second to the scaling up of the results from the sample to a larger population. Integration of the outputs of the different analyses and hypothesis tests requires a kind of meta-model and the competences of a multidisciplinary evaluation team (Saad, 2006). Scaling up relies on the potential to extrapolate from the performance indicators to estimates of impact at an aggregate level.
It is often necessary to employ quantitative models from previous studies to estimate the effect of the indicator in question. It is, however, important to note that individual models have usually been developed for particular purposes, from particular data and with specific assumptions. In the absence of a model appropriate to the purpose of the study, it is usually necessary to apply the "least bad" model available, with appropriate weighting or adjustment.
It is also important to keep the constraints, assumptions and implications behind the design of the study in mind when interpreting the analysis results. Behavioural adaptation may lead to side effects (i.e. indirect effects) and may also involve a prolonged learning process; however, the study period may, for practical reasons, not be sufficiently long to fully explore this.
Extrapolating from the sample to the population depends on the external validity of the experiment. The power of generalisation of the impact estimates to the population is related to their precision, which is composed of two parts: bias and variance. Three approaches can be used:
- If the required performance indicator is available in the sample (e.g. if journey time is an impact of choice for efficiency and journey time has been collected), the impact at the population level can be calculated directly, although sometimes a correction factor or another form of extrapolation adjustment may have to be introduced (Cochran, 1977).
- If neither the required impact indicator nor a proxy indicator is available, then it is necessary to adopt an indirect approach through models which provide an estimate of the impact from the behavioural performance indicators estimated from the sample. Speed changes, for example, can be translated into changes in crash risk by applying statistically derived models from the literature on the relationship between mean speed, speed variance or individual speed and crash risk (see the sketch after this list). Emissions models can be used to calculate the instantaneous emissions of a car as a function of its recorded speed and selected gear.
- Finally, a macroscopic or microscopic traffic simulation model can be applied to translate the effects observed in the sample into a network or traffic-population effect. The outputs from such a simulation can, for example, be used to calculate journey time effects or fuel consumption effects at the network level.
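As a hedged illustration of the indirect, model-based approach, the sketch below translates an observed change in mean speed into a relative change in accident numbers using Nilsson's Power Model. The speeds are hypothetical, and the exponents (4 for fatal accidents, 2 for injury accidents) are the commonly cited values; in a real assessment the model and its exponents should be taken from the literature actually selected for the study.

```python
# Sketch only: the Power Model relates relative accident numbers to the ratio of mean
# speeds raised to a severity-dependent exponent. All values here are illustrative.
def power_model_ratio(v_before_kmh: float, v_after_kmh: float, exponent: float) -> float:
    """Relative change factor in accident numbers for a change in mean speed."""
    return (v_after_kmh / v_before_kmh) ** exponent

v_before, v_after = 52.0, 50.0          # hypothetical mean speeds without/with the system
for label, exp in (("fatal accidents", 4), ("injury accidents", 2)):
    change_pct = 100 * (power_model_ratio(v_before, v_after, exp) - 1)
    print(f"{label}: {change_pct:+.1f} %")
```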
For scaling up, the sample that is chosen for the FOT is very important. For example, if the FOT was carried out with mainly male participants between the ages of 25 and 45, the results cannot in principle be extrapolated directly to the whole population of drivers. Getting a fully representative sample of the whole population is impossible; it is acceptable to have an imperfect sample, as long as the limitations of the sample are known and described alongside the end results. It is still desirable to make the sample as representative as possible (gender and age are important).
More detailed information about scaling up can be found in the next section.
In the Amitran project, work has been done on scaling up (methodology and data protection). More information can be found in Mans et al. (2013).
9.5.1 Scaling up methods
There are two methods of scaling up that can be used (D. Mans, E. Jonkers, I. Giannelos, D. Palanciuc, Scaling up methodology for CO2 emissions of ICT applications in traffic and transport in Europe, ITS Europe Congress, Dublin, 2013). The first method is a direct method, using statistical data. The second method works through modelling, using a macroscopic (multimodal) traffic model at EU-27 level. The choice of scaling-up method is based, among other things, on the availability of models and the type of effects expected.
The methods described in the following explain how scaling up can be applied in theory. In practice, scaling up is a big challenge, and it is important to consider the goal one wants to achieve.
9.5.1.1 Scaling up using statistics
The scaling-up method using statistics starts from the impacts on CO2 emissions at a local level, distinguished for different situations (such as traffic state, vehicle type, etc.), coming from the FOT. If it is not possible to use the local effects of the system directly for scaling up with appropriate statistical datasets, then models (e.g. microscopic traffic models) are needed to transpose this impact into a format more suitable for scaling up.
The definition of situations depends on:
- the system characteristics
- the situational variables that are expected to have the largest impact (e.g. a night vision system will only be active during driving in the dark)
- the possibility of measuring the different situations and the model capabilities.
Data for the same situations is needed at the large-scale level that is targeted. The impact at the local scale is then scaled up using statistical data (for example, kilometres driven on the different road types) for the specific situations.
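A minimal sketch of this weighting step is given below. The per-situation CO2 effects and the vehicle-kilometre shares are hypothetical placeholders; in practice they would come from the FOT (or a micro-simulation) and from national or European statistics respectively.

```python
# Sketch: scaling up local, per-situation effects with exposure statistics.
local_effect_pct = {"urban": -1.5, "rural": -3.0, "motorway": -4.5}   # hypothetical FOT results
vkm_share = {"urban": 0.30, "rural": 0.45, "motorway": 0.25}          # hypothetical vkm statistics

# exposure-weighted aggregate effect on CO2 emissions
aggregate_pct = sum(local_effect_pct[s] * vkm_share[s] for s in local_effect_pct)
print(f"Scaled-up CO2 effect: {aggregate_pct:.2f} %")
```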
Scaling up using statistics is applicable when interaction and second-order effects (i.e. latent demand induced by the improvement of the service level caused by a system) can be expected to be insignificant, when there is a clear effect in certain traffic situations for which data at a higher level are available, or simply when no appropriate macroscopic model is available for the model-based methodology. A drawback of this method is that data sets need to be available for the countries one wants to scale up to. At present there is very limited measurement data for some countries in Europe, and a (software) tool for this approach does not exist yet. The Amitran project is collecting statistics for scaling up from various European countries in a knowledge base, which will be made public around summer/autumn 2014.
9.5.1.2 Scaling up using a macroscopic (multimodal) traffic model
The network of a macroscopic (multimodal) traffic model determines the level on which the results are calculated. Ideally the model is available on country or EU level. Scaling up using such a model can be done in two different ways:
- The calculation of the impact is done with a model other than the macroscopic traffic model. The local effects of the system are in this case determined separately (e.g. via a microscopic simulation tool) and used as input for the macroscopic model at country/EU level. One run is performed to derive the direct effect at the larger scale.
- The calculation of the impact is performed directly with the macroscopic traffic model. In this case, provided the model is at the required level (country/EU), the direct effect of the system is calculated by performing a run of the macroscopic model. A limitation of this approach is that microscopic effects of ITS, e.g. changes in driver behaviour, cannot be taken into account. It can therefore only be used to determine the effects of ITS that mainly affect macroscopic mechanisms in the network, such as mode or route change.
Optionally (in both cases), the economic effect can be calculated with an appropriate model, after which a second run of the macroscopic model is performed to account for the second-order effect.
Scaling up using a macroscopic model is a good method to apply when second-order effects are expected and/or when the effects of the ITS system can be used directly as an input parameter for the macroscopic model; naturally, it can only be used if such a large-scale model is available. Being a more elaborate method than scaling up using statistics, it allows specific circumstantial differences to be taken into account, especially if there are interaction effects. A downside of scaling up with a macroscopic traffic model is that urban roads are usually not part of the network at such a large scale, and that it requires more effort than scaling up using statistics.
9.6 Appropriate techniques at the five links of data analysis
The five links follow the right branch of the development process of an FOT from data quality control to global assessment. Different techniques of data analysis and modelling which could be used at each step are presented here.
9.6.1 Step 1: data quality analysis
Data quality analysis is aimed at making sure that the data are consistent and appropriate for addressing the hypothesis of interest (FESTA D3, section 4.5). Data quality analysis starts from the FOT database and determines whether the specific analysis that the experimenter intends to perform on the data to address a specific hypothesis is feasible. Data quality analysis can be performed by following the four sub-steps reported below (and shown in Figure 9.3). A report detailing the quality of the data to be used to test the hypothesis of interest should ideally be created.
The sub-steps for data quality analysis are:
- Assessing and quantifying missing data (e.g. percentage of data actually collected compared to the potential total amount of data which it was possible to collect).
- Ensuring that data values are reasonable and units of measure are correct (e.g. a mean speed value of 6 may be unreasonable unless speed was actually recorded in m/s instead of km/h).
- Checking that the data dynamic over time is appropriate for each kind of measure (e.g. if the minimum speed and the maximum speed of a journey are the same, then the data may not have been correctly sampled).
- Guaranteeing that the features of the measures satisfy the requirements for the specific data analysis (e.g. in order to calculate a reliable value of the standard deviation of lane offset, the lane offset measure should be at least 10 s long; additionally, this required length may depend on the sampling rate, see AIDE D2.2.5, section 3.2.4).
Note that the first three sub-steps are general quality checks; if any of them fails, data analysis cannot proceed. Any failure should be reported to those responsible for the database so that the possible technical error behind it can be tracked down and solved. The last sub-step is different: it relates to the specific analysis or to a specific performance indicator to be used in the subsequent data analysis. Consequently, if this fourth check fails, the cause may not be a technical issue that can be solved, but an intrinsic limitation of the collected data.
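A minimal sketch of the three general checks for a logged sensor signal, assuming a pandas Series with a hypothetical speed signal in km/h, could look as follows; the plausibility limits are placeholders to be set per signal.

```python
import pandas as pd

def quality_report(signal: pd.Series, expected_samples: int,
                   lower: float = 0.0, upper: float = 250.0) -> dict:
    """General quality checks for one logged signal (e.g. df['speed_kmh'])."""
    return {
        # 1. missing data, relative to what could have been collected
        "missing_pct": 100 * (1 - signal.notna().sum() / expected_samples),
        # 2. plausibility of values and units
        "out_of_range_pct": 100 * ((signal < lower) | (signal > upper)).mean(),
        # 3. signal dynamics: a frozen sensor shows no variation over a journey
        "constant_signal": bool(signal.nunique(dropna=True) <= 1),
    }
```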
Figure 9.3: Block diagram of data quality analysis
Data quality analysis is handled differently with regard to data from in-vehicle sensors (generally CAN data and video data) and subjective data (generally from questionnaires). Subjective data, once collected, is hard to verify unless the problem stems from transcription errors.
9.6.2 Step 2: data processing
Once data quality has been established, the next step in data analysis is data processing. Data processing aims to "prepare" the data for addressing the specific hypotheses which will be tested in the following steps of data analysis. Data processing includes the following sub-steps: filtering, deriving new signals from the raw data, event annotation, and reorganisation of the data according to different time scales (Figure 9.4). Not all of these sub-steps are necessarily needed for every analysis; however, at least some of them are normally crucial.
Figure 9.4: Block diagram for the procedure of data processing
Data filtering can involve a simple frequency filter, e.g. a low-pass filter to eliminate noise, but also any kind of algorithm aimed at selecting specific parts of the signals. Very often a new signal, more suitable for the hypothesis to be tested, has to be derived by combining one or more signals. Marking specific time indexes in the data so that events of interest are recognised is fundamental for identifying the parts of the data which should be analysed. Ideally, an algorithm should be used to go through all FOT data and mark the events of interest. However, especially when the data to be annotated come from video and require an understanding of the traffic situation, writing a robust algorithm can be very challenging even with advanced image analysis techniques, and manual annotation by an operator may be preferable. Re-organising the data into the time scale most suitable for the specific hypothesis to be addressed has to be considered in the following steps of the data analysis.
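The sketch below illustrates two of these sub-steps, low-pass filtering a noisy signal and deriving a new signal (time headway) from raw measures. The column names, the 10 Hz logging rate and the 1 Hz cut-off frequency are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt

FS = 10.0                                  # assumed logging frequency (Hz)
B, A = butter(N=2, Wn=1.0 / (FS / 2))      # 2nd-order low-pass filter, 1 Hz cut-off

def preprocess(journey: pd.DataFrame) -> pd.DataFrame:
    """Filter a noisy speed signal and derive time headway from range and speed."""
    out = journey.copy()
    out["speed_ms"] = filtfilt(B, A, out["speed_ms"])              # de-noised speed
    # derived signal: time headway to the lead vehicle, undefined near standstill
    out["time_headway_s"] = np.where(out["speed_ms"] > 0.5,
                                     out["range_m"] / out["speed_ms"], np.nan)
    return out
```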
9.6.3 Step 3: Performance Indicators calculation
There are five kinds of data which provide the performance indicators: Direct Measures, Indirect Measures, events, Self-Reported Measures and Situational Variables. The scale of the dataset and the uncontrolled variation in driving situations that results from driving freely with the vehicles become seriously limiting factors unless an efficient calculation methodology is implemented. The choice of which performance indicators and hypotheses to calculate is clearly dependent on the amount of effort required. Efficient calculation methods need to anticipate that (a) performance indicators will be calculated on imperfect data, so there is a strong need to create special solutions for "exceptions to perfect data", and (b) performance indicator calculation requires situation or context identification: a "denominator" or exposure measure is required to make a measure comparable, i.e. to determine how often a certain event occurs per unit of something (e.g. km, road type, manoeuvre). The fact that test exposure is largely uncontrolled (not tightly controlled as in experiments) means that analysis is largely conducted by first identifying the important contextual influences and then performing the analyses on a "controlled" subset of the data for comparison.
The ability to find and classify crash-relevant events (crashes, near-crashes, incidents) is a unique possibility offered by FOTs for studying direct safety measures. As reported in 5.3.3, this possibility should be exploited by using a process of identification of critical events based on kinematic trigger conditions (e.g. lateral acceleration > 0.20 g). The definition of these trigger values and the associated processes to filter out irrelevant events are of particular importance for enabling efficient analyses.
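As a sketch of combining a kinematic trigger with an exposure denominator, the function below counts harsh lateral events and expresses them per 100 km, split by road type. The column layout, the 10 Hz rate and the 0.20 g trigger follow the example above and are assumptions rather than recommended values.

```python
import pandas as pd

def events_per_100km(df: pd.DataFrame, fs: float = 10.0, trigger_g: float = 0.20) -> pd.Series:
    """Harsh lateral events per 100 km by road type (df holds 10 Hz samples)."""
    exceeded = df["lat_acc_g"].abs() > trigger_g
    frame = df.assign(
        event=exceeded & ~exceeded.shift(fill_value=False),   # count rising edges only
        dist_km=df["speed_ms"] / fs / 1000.0,                 # distance covered per sample
    )
    by_road = frame.groupby("road_type")[["event", "dist_km"]].sum()
    return 100.0 * by_road["event"] / by_road["dist_km"]
```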
Care should be taken to use appropriate statistical methods to analyse the performance indicators. The methods used must consider the type of data and the probability distribution governing the process. Categorical or ordinal data, such as that from questionnaires, needs to be analysed appropriately. Data on the degree of acceptance of a system (e.g. positive, neutral, negative) can be applied in multivariate analysis to link it to behavioural indicators so as to create new performance indicators.
9.6.4 Step 4: hypothesis testing
Hypothesis testing in an FOT generally takes the form of a null hypothesis of no effect of the system on a performance indicator, such as the 85th percentile speed, tested against an alternative such as a decrease of x% in the performance indicator. To carry out the test, one relies on two samples of data, with and without the system, from which the performance indicator and its variance are estimated. Comparing the performance indicators between the two samples with and without the intervention is done using standard techniques such as a t-test on normally distributed data. The assumption here is that there is an immediate and constant difference between use and non-use of the system, i.e. that there is no learning function, no drifting process and no erosion of the effect.
However, the assumption of a constant effect is often inappropriate. To get a complete view of the sources of variability and to handle the problem of serially correlated data, multi-level models are recommended (Goldstein, 2003).
With such models, drivers or situations with missing data generally have to be included: eliminating drivers or situations because of missing data, in order to keep a complete data set, may bias the estimation of the impact.
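A minimal sketch of such a multi-level model, using the mixed-model implementation in statsmodels with a random intercept per driver, is shown below on synthetic data; all column names, sample sizes and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_drivers, n_journeys = 30, 20            # hypothetical study layout
journeys = pd.DataFrame({
    "driver_id": np.repeat(np.arange(n_drivers), n_journeys),
    "treatment": np.tile([0] * (n_journeys // 2) + [1] * (n_journeys // 2), n_drivers),
})
journeys["mean_speed"] = (80
                          + np.repeat(rng.normal(0, 5, n_drivers), n_journeys)  # driver effect
                          - 1.5 * journeys["treatment"]                         # assumed system effect
                          + rng.normal(0, 3, len(journeys)))                    # journey-level noise

# A random intercept per driver handles the cluster/serial correlation of repeated journeys
model = smf.mixedlm("mean_speed ~ treatment", journeys, groups=journeys["driver_id"])
print(model.fit().summary())   # the 'treatment' coefficient estimates the system effect
```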
It is assumed that the data will have been cleaned up in the data quality control phase. Nevertheless, to make sure that the estimation is influenced as little as possible by outliers, one can use either robust estimates, such as a trimmed mean and variance, or non-parametric tests such as the Wilcoxon rank test or a robust Minimum Mean regression (Gibbons, 2003; Wasserman, 2007; Lecoutre and Tassi, 1987). Such tests provide protection against violation of the assumption of a normal distribution of the performance indicator.
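The sketch below illustrates two of these outlier-resistant options: a trimmed mean of the per-driver differences and a Wilcoxon signed-rank test on paired baseline/treatment values. The numbers are hypothetical per-driver PI means, one of them an outlier.

```python
import numpy as np
from scipy import stats

baseline  = np.array([83.1, 79.4, 88.0, 91.2, 77.8, 85.5, 120.0, 82.3])   # note one outlier
treatment = np.array([81.0, 78.9, 86.2, 88.7, 77.1, 84.0, 117.5, 80.6])

diff = baseline - treatment
print("10% trimmed mean of differences:", stats.trim_mean(diff, proportiontocut=0.1))
print(stats.wilcoxon(baseline, treatment))   # paired, distribution-free alternative to the t-test
```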
9.6.5 Additional Step 4: data mining
Data mining techniques allow the uncovering of patterns in the data that may not be revealed by the more traditional hypothesis testing approach. Such techniques can therefore be extremely useful as a means of exploratory data analysis and for revealing relationships that have not been anticipated. The data collected in an FOT are a huge resource for subsequent analysis, which may well continue long after the formal conclusion of the FOT. One relatively simple technique for pattern recognition is to categorise a dataset into groups. Cluster analysis tries to identify homogeneous groups of observations in a set of data according to a set of variables (e.g. demographic variables or performance indicators), where homogeneity refers to the minimisation of within-group variance together with the maximisation of between-group variance. The most commonly used methods for cluster analysis are k-means, two-step and hierarchical clustering (Lebart et al., 1997; Everitt, 2000).
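As a small illustration of cluster analysis, the sketch below groups drivers on two hypothetical indicators (mean speed and mean time headway) with k-means after standardisation; the data and the choice of two clusters are purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows = drivers, columns = [mean speed (km/h), mean time headway (s)]
X = np.array([[82, 1.4], [95, 0.9], [78, 1.8], [99, 0.8], [84, 1.5], [92, 1.0]])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
print(labels)   # cluster membership per driver; the profile of each group is then inspected
```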
9.6.6 Step 5: global assessment
This section deals with the identification of models and methodologies to generalise results from a particular FOT to a global level in terms of traffic safety, environmental effects and traffic flow. One problem when generalising results from an FOT is to know how closely the participants in the FOT represent the target population. It is often necessary to control for usage, market penetration, compliance (the system might be switched off by the driver) and the reliability of the system. The process of going from the FOT data to safety, traffic flow and environmental effects is illustrated in Figure 9.5. In this process two steps need to be taken. One is scaling up the FOT results, for example to higher penetration levels or larger regions. The other is translating the results from the level of performance indicators (for example, the time headway distribution) to the level of effects (for example, the effect on the number of fatalities). For each type of effect there are (at least) two different ways to generalise the results: through microsimulation or directly.
Figure 9.5: Block diagram of translating FOT indicators to large scale effects
The direct route includes both estimation directly from the sample itself and estimation through individual or aggregated models. Its main advantages are that it is rather cheap and quick. The alternative is to use a traffic microsimulation model, which represents the behaviour of individual driver/vehicle units. The advantages of microsimulation are that it can be more reliable and precise and can incorporate indirect effects (such as congestion in the network at peak times).
The simulation of indirect effects allows the study of interaction effects between equipped and non-equipped vehicles. This aspect is of major importance when testing a certain function in an FOT. By means of the collected real-world data, only the direct effects caused by the tested function in the equipped vehicle can be analysed; effects due to interaction with other vehicles cannot, because data collection is only carried out by means of the test vehicles. Traffic simulations are therefore a useful tool to investigate indirect effects in different driving situations. The indirect effects are also of interest when testing connected vehicles: the interaction between connected and non-connected vehicles provides necessary information for the assessment of safety and traffic flow effects. However, microsimulations for connected vehicles require further information to build the simulation environment. Information on the communication range of the connected vehicles, information on when vehicles are connected to another vehicle, and the integration of the communication infrastructure might require more effort than originally foreseen.
Microsimulations also allow certain effects or functionalities to be studied before the data collection starts, in order to get an idea of the function and its effect; different settings of the function can be tested. Another possibility for getting an idea of the effects in advance is to conduct driving simulator tests. These tests can be performed before or in parallel with the real-world tests. In comparison to microscopic simulations, driving simulator tests require more effort: participant recruitment, the interviews after the tests and the analysis of the collected data (subjective and objective) all require a large amount of time, which needs to be taken into account in the preparation phase. In some cases the simulations may need input from real traffic in order to understand how certain situations evolve; in particular, traffic simulations need data on the relevant network and the traffic density, e.g. whether the vehicle is driving in free flow or in a car-following situation. Subjective data is of importance as well. For instance, the question of why certain drivers did not follow a function's warning or advice can be relevant when implementing the driver model, and interviews with drivers can be an efficient approach to understanding certain driver behaviour patterns. Moreover, traffic simulations are an important tool for interpreting the results or the determined effects: by means of traffic simulations, more details of the relevant network can be observed and provided, in order to understand a certain behaviour or effect.
Since traffic microsimulation models consider individual vehicles in the traffic stream, there is consequently the potential to incorporate FOT results in the driver/vehicle models of the simulation. Impacts at the traffic system level can then be estimated through traffic simulations with varying levels of system penetration in the vehicle population.
Here it is important to provide a detailed driver model. The driver model becomes more realistic if the required details are determined by means of data gathered in real traffic. In addition, this requires the provision of a driver model for the baseline behaviour as well as for the adapted behaviour in the treatment phase, due to the use of the tested function.
Microsimulation does not necessarily yield the impact variable that is of interest. Various aggregated and individual models are needed to convert, for instance, speed into safety effects (e.g. via the Power Model, which describes the relationship between driving speed and the risk of an accident at different levels of severity). In addition, the modelling detail of traffic microsimulation places restrictions on the practical size of the simulated road network. Macroscopic or mesoscopic traffic models combine the possibility of studying larger networks with reasonable calibration effort; these models are commonly based on speed-flow or speed-density relationships. Large-area impacts of FOT results can therefore be estimated by applying speed-flow relationships obtained from microsimulation in macro- or mesoscopic traffic modelling.
Notes
- ↑ For more detailed information the reader should refer to FESTA Deliverable 2.4[1]
- ↑ It may not be desirable, for example, to waste sample size by recruiting drivers who only drive small amounts each week. Many FOTs have for good reasons used a quota sampling procedure, in which equal numbers of (say) males and females are recruited. This can create bias when scaling up the observed data to estimates of effects at a national or European scale.