Medium-Range Weather Prediction: The European Approach
Austin Woods
The Analysis System — from OI to 4D-Var

In variational data assimilation one begins, as in OI, with the differences between the analysed values on the one hand, and the observed and background values on the other. Determining the adjustments to the background forecast that will minimize the sum of the weighted measures of these differences gives the analysis. The weights applied depend on estimates of the typical errors of the observations and background forecasts. They take dynamical imbalances, for example between wind and pressure fields, into account. In three-dimensional variational (3D-Var) assimilation, the differences from the observed values are somewhat artificially assumed to be valid at specific analysis times (usually the “synoptic hours” of 00, 06, 12 or 18 UTC). In four-dimensional (4D-Var) assimilation, the differences are processed at the time of each observation. The minimization therefore involves repeated model runs for the period over which observations are being assimilated, typically six or twelve hours. This clearly requires very large computing resources.
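The weighted minimization described above can be sketched for a toy two-variable system. Everything below — the covariance matrices, the observation operator and the values — is invented for illustration and bears no relation to the Centre's operational code.

```python
import numpy as np

# Toy variational analysis: minimize
#   J(x) = (x-xb)^T B^-1 (x-xb) + (Hx-y)^T R^-1 (Hx-y)
# where xb is the background forecast, y the observation,
# B and R the background- and observation-error covariances.
xb = np.array([10.0, 5.0])                 # background forecast (made up)
B = np.array([[2.0, 0.5], [0.5, 1.0]])     # background-error covariance
H = np.array([[1.0, 0.0]])                 # we observe only the first variable
y = np.array([12.0])                       # the observation
R = np.array([[1.0]])                      # observation-error covariance

def cost(x):
    db = x - xb
    do = H @ x - y
    return db @ np.linalg.solve(B, db) + do @ np.linalg.solve(R, do)

# For this linear-Gaussian toy problem the minimum has a closed form:
#   xa = xb + B H^T (H B H^T + R)^-1 (y - H xb)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
xa = xb + K @ (y - H @ xb)                     # the "analysis"
```

The analysis ends up between the background (10) and the observation (12), pulled towards whichever has the smaller estimated error, and its cost is lower than that of the background alone.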
Development of 4D-Var was seen at the outset as especially promising because of its optimal use of the so-called “asynoptic” data measured continuously by satellite, and because variational assimilation in general opened the door to the direct use of radiance data from satellites that we will consider in Chapter 13.
Where did the concept of variational assimilation originate? We saw in Chapter 7 that in 1980 a scientist visiting from Russia, Dr Kontarev, gave several seminars on the adjoint method that had been developed by Prof Marchuk in 1974. This method allows computation of the sensitivity of any output parameter to any input parameter for any dynamical system at a reasonable cost. Olivier Talagrand, who as we have seen developed the incremental approach to OI, followed the lectures. He returned to his institute in Paris, the Laboratoire de Météorologie Dynamique (LMD), and started working on the adjoint method in collaboration with a mathematician Xavier Le Dimet. Initial experiments with a shallow-water model were unsuccessful; gravity waves generated too much noise. However he proposed further research to his students. One of them, Philippe Courtier, newly-arrived at LMD from Météo France, started to work with a filtered model, that is one that filtered out the unwanted effects of the gravity waves.
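The attraction of the adjoint method can be illustrated with a toy linear model: applying the transpose of the model backwards yields the sensitivity of an output to every input in a single sweep, at roughly the cost of one forecast. The matrix, output functional and step count below are invented for illustration; this is not Marchuk's formulation.

```python
import numpy as np

# Toy linear model step x_{n+1} = M x_n; the output of interest is
# J = g^T x_N, a single number diagnosed from the final state.
rng = np.random.default_rng(0)
M = 0.5 * rng.standard_normal((3, 3))   # one model time step (made up)
g = np.array([1.0, 0.0, 0.0])           # we care about the first output variable
N = 4                                   # number of time steps

def forward(x0):
    x = x0
    for _ in range(N):
        x = M @ x
    return g @ x

# Adjoint sweep: start from g and apply M^T backwards through the steps.
# The result is the gradient dJ/dx0 with respect to EVERY input at once.
lam = g
for _ in range(N):
    lam = M.T @ lam
grad_adjoint = lam

# Check against finite differences, one perturbed forecast per input
x0 = rng.standard_normal(3)
eps = 1e-6
grad_fd = np.array([(forward(x0 + eps * e) - forward(x0 - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
print(np.allclose(grad_adjoint, grad_fd, atol=1e-5))  # True
```

The finite-difference check needs two model runs per input variable; the adjoint needs one backward sweep regardless of how many inputs there are, which is what made the method so attractive for models with hundreds of thousands of degrees of freedom.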
By 1985, Courtier and Talagrand had obtained results showing that they had tamed the gravity-wave noise. Now the possibility was opened to apply the variational technique to an operational NWP system.
Talagrand returned to the Centre in early 1987. With Courtier, now a staff member on secondment from Météo France, he started a feasibility study on
use of variational analysis in the Centre’s system. Their conclusion, that it would be more efficient to re-code the entire model than to write the adjoint of an old code, was not universally welcomed. However, they persisted, with encouragement from Burridge. Their pioneering work resulted eventually in an award from the Académie des Sciences.
There was much work to be done before the benefits of the investment in variational data assimilation could be reaped. In October 1988, Lennart Bengtsson noted that “major efforts are required before this technique can be developed into a practical system”. This was true; 3D-Var did not become operational at ECMWF until January 1996 and almost ten years had elapsed before Florence Rabier put the finishing touches to the world’s first operational 4D-Var system, implemented at the Centre in November 1997.
Throughout this long period, Burridge, first as Head of Research, then as Director, “kept the faith”. He defended his research programme from those who queried the computing cost, and the overall feasibility, of 4D-Var. He was disappointed that the UK Met Office did not become involved, and share the workload. For him, this was “a very tough time”. He remembered the Council as being generous in its approach; it was not overly critical when quick results were not forthcoming from the long research programme. The benefits indeed took some time to become apparent; some claimed that years of research work seemed not to be producing anything useful. Eventually however Burridge was pleased that his conviction had been vindicated; it was not until the mid-to-late 1990s that it became clear that the decisions of the late 1980s to work towards 4D-Var were justified.
He noted later with satisfaction that at last “it became generally recognised that the substantial forecast improvements over the following years came largely from 4D-Var”. In the next Chapter, we will see just how much forecast accuracy improved from the late 1990s.
Burridge believes that still, at the time of writing, the potential of 4D-Var has not been fully realised. He is confident that there are “one or two more days of predictability to be gained from the Centre’s forecasting system”.
The challenge remains: to exploit fully the new data types.
The Centre was at the forefront in using these kinds of data. Operational introduction of 4D-Var followed at other centres. Jean-Noël Thépaut, one of the pioneers of pre-operational development of 4D-Var at the Centre, played a key role in the work leading to implementation at Météo France in June 2000, and Andrew Lorenc, who had returned to the UK Met Office in 1980, led the work there that brought 4D-Var into operation in October 2004.
The ECMWF data assimilation system will play an important role in studies of observing system impact and observation network design, aiming at optimisation of the global observing system. The international work is coordinated through WMO, and a programme called EUMETNET Composite Observing System (EUCOS), which is run under the auspices of the Network of European Meteorological Services (EUMETNET).
The Medium-Range Model
The comprehensive atmosphere-ocean-land model developed at the Centre over the years forms the basis for the Centre’s data assimilation and forecasting activities. In other Chapters, we review the Centre’s activities in analysis, wave modelling, seasonal prediction and ensemble forecasting.
Here we will review briefly the development of the main high-resolution medium-range model.
We see in Article 2 of the Convention that inter alia the objectives of the Centre shall be:
• to develop dynamic models of the atmosphere with a view to preparing medium-range weather forecasts by means of numerical methods;
• to carry out scientific and technical research directed towards improving the quality of these forecasts.
A model covering the globe would be required. As we have seen, the weather in mid-Pacific today can influence the weather over Europe five or six days later. Today’s weather south of the equator will influence the weather next week in the Northern Hemisphere. Besides, States in Europe have an interest in global weather: for ship-routeing, for offshore oil exploration in the southern Pacific and elsewhere, for expeditions to the Antarctic, and for many other activities.
In Chapter 7 we saw how the Centre prepared its first operational medium-range forecasts beginning in August 1979. For its time, the Centre’s model of the world’s atmosphere was sophisticated. It delivered five-day forecasts to the National Meteorological Services with average accuracy similar to that of the best of the two-day forecasts that had been available to them ten years earlier.
We saw that a grid-point model was used, in which the temperature, wind and humidity were predicted on a network of points, separated by about 200 km around the equator, but closer together in the east-west direction nearer the poles. The network was repeated at 15 levels between the surface, on which pressure, as well as rain- and snowfall, were predicted, and the top of the model atmosphere, which was at a height of 25 km. The lower levels were separated vertically by a few hundred metres, those aloft by a couple of kilometres. Each level had 28,800 points; the model had 432,000 grid points in total.
At the beginning, the definition of cloud in the model was perhaps by today’s standards somewhat primitive, but was nonetheless impressive.
When the humidity at a grid point exceeded 100%, stratus clouds formed.
Rain or snow would fall if the temperature was low enough or if there was enough liquid water. Convective or cumulus clouds were formed depending on the instability of the grid column and convergence of water vapour. Rain falling through the model atmosphere would evaporate in dry air.
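The cloud and precipitation rules just described might be caricatured in a few lines of present-day code. Every name and threshold below is invented for illustration; none of it is taken from the Centre's model.

```python
# A highly simplified sketch of the cloud rules described in the text.
# Thresholds and variable names are made up for illustration only.
def diagnose_column(rel_humidity, temperature_c, liquid_water,
                    moisture_convergence, unstable):
    """Return (cloud_type, precipitation) for one grid column."""
    cloud = None
    if unstable and moisture_convergence > 0:
        cloud = "cumulus"        # convective cloud: instability + moisture supply
    elif rel_humidity > 1.0:     # humidity exceeds 100% -> layer cloud
        cloud = "stratus"

    precip = None
    if cloud and liquid_water > 1e-4:   # enough condensate (made-up threshold)
        precip = "snow" if temperature_c < 0 else "rain"
    return cloud, precip

# Saturated, mild, plenty of liquid water -> stratus giving rain
print(diagnose_column(1.05, 5.0, 2e-4, 0.0, False))   # ('stratus', 'rain')
```

The point of the sketch is only the branching logic: layer cloud diagnosed from supersaturation, convective cloud from instability and moisture convergence, and the rain/snow decision taken from the temperature of the column.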
Short-wave radiation incoming from the sun, long-wave infrared radiation from the earth to space, and multiple scattering of radiation between cloud layers, were all calculated. Absorption of heat by water vapour, ozone and carbon dioxide was taken into account as well. Computing the effects of radiation demanded so much computer power that, at first, it was done only twice each forecast day.
The laws of physics tell us what moves the air around, what makes it warmer or cooler, and what makes clouds give rain or snow. The model was based on the gas law for a mixture of dry air and water vapour, the laws of conservation of mass and water, the equation for momentum and the first law of thermodynamics. Heating and cooling of the atmosphere by radiation, the turbulent transfer of heat, moisture and momentum, the thermodynamic effects of evaporation, sublimation and condensation, and the formation of rain and snow were all described.
Starting from the analysis at noon, a forecast was made of the tiny changes in wind speed and direction, temperature, and humidity at each of the 432,000 grid points for 15 minutes later at 12.15. This gave a new starting point. A new forecast was made now for 12.30, and so on until after 960 15-minute time steps the forecast to ten days was completed. For each step, seven numbers — the temperature, wind and so on — were required at two time steps at each grid point — a total of six million numbers. The fields were stored on four disks of the CRAY-1. All the data for a vertical slice of atmosphere above a line of latitude were moved from the disks to the CRAY-1 memory. The CRAY-1 would perform the calculations for this slice, return the results to disk, and then move on to the next. About 50 million calculations were made each second, and the forecast to ten days took a little less than four hours. Although the analysis cycles were run over
weekends, forecasts were run only from Monday to Friday. Weekend running of the forecast began in August 1980.
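The arithmetic quoted above is easy to check: 960 steps of 15 minutes make exactly ten days, and seven variables held at two time levels over 432,000 grid points come to just over six million numbers. A back-of-envelope sketch:

```python
# Back-of-envelope check of the figures quoted in the text.
step_minutes = 15
steps = 10 * 24 * 60 // step_minutes    # ten-day forecast in 15-minute steps
grid_points = 28_800 * 15               # points per level x number of levels
variables = 7                           # temperature, wind components and so on
time_levels = 2                         # each variable held at two time steps
numbers_held = variables * time_levels * grid_points

print(steps, grid_points, numbers_held)  # 960 432000 6048000
```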
Development of the model from scratch to operational implementation was an achievement that was a source of pride to Wiin-Nielsen, and indeed to all the staff of the Centre. David Burridge had been given the task of designing the numerical scheme for the model. Burridge, Jan Haseler from the UK, Zavisa Janic from Yugoslavia and others, made their first experiments, making forecasts from low-resolution “Data Systems Test” analyses, which had been compiled for FGGE. It was soon evident that the model had the benefit of a robust and stable numerical scheme. Tony Hollingsworth, Head of the Physical Aspects section, with Jean-François Geleyn from France, Michael Tiedtke from Germany and Jean-François Louis from Belgium were largely responsible for the model physics.
A research team including David Burridge, Jan Haseler, David Dent, Michael Tiedtke and Rex Gibson went to the Cray factory at Chippewa Falls in mid-1977 on a memorable trip. In between sometimes heated discussions between Tiedtke and Gibson, who did not always find it easy to see eye-to-eye, with Burridge trying to keep the peace, Dent calmly typing away at the console, and Haseler getting some sleep under the table, the team managed to complete a one-day global “forecast” on a CRAY-1 at a speed about ten times faster than that of the CDC 6600.
By the end of the year, more predictions to ten days were being run. The scientists of the Research Department would run many thousands of numerical experiments in the years to come. Work was easier when the staff moved to Shinfield Park in late 1978, where the Centre’s CRAY-1 and CDC Cyber 175 had been installed in the Computer Hall.
Broadly, the work on modelling the atmosphere numerically to give a forecast can be separated into:
• the analysis (or assimilation of the observations to give the initial fields from which the prediction starts); this is dealt with in the previous Chapter;
• the “physical aspects” of the model, such as modelling the processes that cause condensation of water to form clouds, rain, and snow; the consequent generation or absorption of heat, friction as the wind blows close to the surface and so on; and
• the “numerical aspects”, including modelling the movement of parcels of air, heating of air by compression and cooling by expansion, what sort of grid is best, or even if the calculations should be made not on a grid, but instead using continuous waves in a “spectral” version of the model.
Within this broad-brush description, other essential work was required.
Systems were developed to diagnose the model behaviour, and its accuracy and performance. Basic questions had to be answered. Given the power of the CRAY-1, what was best: to increase the model resolution, i.e. bring the grid points closer together, or make the physics more sophisticated? What was the best way to eliminate from the calculations those things not required for the forecast? For example, the atmosphere is suffused with gravity waves, most of which have little influence on what tomorrow’s weather will be like. A numerical model will use up lots of resources modelling these unless they are somehow eliminated.