Traditional depth imaging velocity model building (VMB) is usually a top-down approach, run in a start-stop fashion. A typical cycle is: determine the constraints for the inversion, run the inversion, check the results, condition the results, and apply the model. This repeats in a rinse-and-repeat manner, working down through the data until the VMB is complete. In general, model 1 becomes model 2, which becomes model 3, and so on; in practice, models 2a and 2b may represent work that was attempted and discarded before arriving at model 3. The approach is highly non-linear and fitful, prone to error because a single model is carried in and out of each step, and heavily dependent on manual intervention. Consequently, VMB can be a slow process, especially in more challenging geological environments.
Monte Carlo simulations use repeated random sampling to explore problems whose solution is not uniquely determined. When performing an inversion for velocity model building, we use observations drawn from the data to infer the values of the true earth model. However, most inversion-based velocity model-building methods are under-determined: the solution is non-unique because the observations are insufficient to constrain the inversion to a single, correct answer. Monte Carlo simulation of the model space mitigates some of these limitations by sampling the range of models that are consistent with the data.
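The non-uniqueness of an under-determined inversion can be illustrated with a toy numerical sketch (the forward operator, parameter counts, and tolerance below are all illustrative assumptions, not taken from any real VMB system): with fewer observations than model parameters, random sampling finds many distinct models that fit the same data equally well.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy under-determined problem (illustrative only): 2 observations,
# 4 unknown model parameters, linear forward operator G.
G = rng.normal(size=(2, 4))           # forward operator: model -> observations
m_true = np.array([1.0, 0.5, 0.8, 1.2])
d_obs = G @ m_true                    # noise-free "observed" data

# Monte Carlo sampling of the model space: draw random candidate models
# and keep every one that reproduces the observations within a tolerance.
tol = 0.1
accepted = []
for _ in range(100_000):
    m = rng.uniform(0.0, 2.0, size=4)
    if np.linalg.norm(G @ m - d_obs) < tol:
        accepted.append(m)

accepted = np.array(accepted)
# Many different models fit the data: the inversion is non-unique.
print(f"{len(accepted)} acceptable models found")
print("range of parameter 0 across accepted models:",
      accepted[:, 0].min(), "to", accepted[:, 0].max())
```

The spread of each parameter across the accepted models is exactly the ambiguity that a single-model, deterministic inversion hides, and that a Monte Carlo exploration makes visible.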
The schematic below describes how hyperModel works. The number of models used in the Monte Carlo simulation (Perturbation i) and the number of loops the process runs (n Loops) depend upon the data - both the accuracy of the starting model (Initial Model M1) and the quality of the seismic data (Input data). The simplified workflow builds a velocity model in a global sense while minimizing manual intervention: the protracted ‘start-stop’ of classical model building is replaced by a compute-intensive procedure run on optimized, parallelized resources. Limiting manual intervention enables the user to focus on the convergence of the model towards the desired result, which is achieved in a much-reduced timeframe.
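The loop structure described above can be sketched in miniature (this is a hedged illustration of a generic perturb-evaluate-update Monte Carlo loop; the misfit function, perturbation scheme, and all variable names are assumptions, not the actual hyperModel algorithm): each loop generates many perturbations of the current model, scores each against the input data, and carries the best-fitting model forward into the next loop.

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(model, d_obs, G):
    """Stand-in for comparing simulated data with the observed seismic data."""
    return np.linalg.norm(G @ model - d_obs)

# Toy linear "earth" (illustrative only): 6 observations, 10 parameters.
G = rng.normal(size=(6, 10))           # toy forward operator
m_true = rng.uniform(1.5, 4.5, 10)     # hidden true model
d_obs = G @ m_true                     # Input data

model = np.full(10, 3.0)               # Initial Model M1: a constant guess
n_loops, n_perturbations = 20, 500     # n Loops and Perturbation i counts
for loop in range(n_loops):
    scale = 0.8 ** loop                # shrink perturbations as loops converge
    candidates = model + rng.normal(0.0, scale, size=(n_perturbations, 10))
    errors = [misfit(c, d_obs, G) for c in candidates]
    best = candidates[int(np.argmin(errors))]
    if misfit(best, d_obs, G) < misfit(model, d_obs, G):
        model = best                   # keep only models that improve the fit

print("final misfit:", misfit(model, d_obs, G))
```

Because every loop runs its perturbations independently, the evaluation step parallelizes naturally across compute resources, which is what allows the compute-intensive procedure to replace manual start-stop iteration.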