A large volume of data (100s of TB) is downloaded every day from the CDS.
On this page we summarise:
- CDS request times: the reasons underpinning the CDS queuing time
- CEMS-Flood data on MARS: the size of the CEMS-Flood datasets stored on MARS and accessible through the CDS
- Request strategy: the best practices to maximise efficiency and minimise wait time
CDS queuing can be monitored from the 'Your Requests' page or from the 'CDS Live' page.
CDS retrieval times can vary significantly depending on the number of requests the CDS is handling at any one time, and on the following factors that affect EFAS and GloFAS:
- The priority of the dataset in question
- The size of the request
- The number of requests submitted by a user
- The number of requests to retrieve data from ECMWF Archive
- The number of requests requesting a specific dataset
- The number of active slots
- The size of the overall queue
The CDS strives to deliver data as fast as possible; however, it is not an operational service and should not be relied upon to deliver data in real time as it is produced.
Here we will try to give some context on why requests can take time:
Data for the CEMS-Flood (EFAS and GloFAS) datasets are held within MARS at ECMWF. MARS is a system designed for the retrieval of GRIB files, based on a disk cache and tape storage architecture. The most recent data are held on the disk cache, while all available data are stored on tape. When a user requests data, the CDS places that request in a queue. Requests are prioritised by the CDS based on the factors listed above.
Once the request becomes eligible, it is passed to the MARS service at ECMWF for extraction of the relevant fields. It is only at this point that you will see your request as 'Running'.
When a user selects a sub-area of data, this does not mean that MARS reads only that area from the archive. Each timestep of each date of each variable is classed as an individual GRIB field, and MARS extracts sub-areas by retrieving the entire global grid, cropping the selected area, and returning only the requested area to the user.
MARS is a separate service from the CDS and has its own constraints on workload. MARS applies its own QoS limits to data requests, as it is shared between operational services (e.g. the production of ERA5 and GloFAS) and non-operational services such as the CDS.
The CDS service can, from time to time, experience periods of high user activity and increased queuing times, even for small requests. During these times we ask you to kindly wait for the queue to be processed, as the number of available slots is fixed and cannot be increased.
Figure 1 shows a period of high user activity. GloFAS and EFAS products are served by the adaptor.mars.external service; you can see that the number of active users (blue line) is well above the 50 slots allocated to GloFAS and EFAS requests (green line). When the blue line falls back below the green line, the number of queued users starts decreasing until eventually there is no queuing time for any user request.
Figure 1
CEMS-Flood data on MARS: the size of the CEMS-Flood datasets stored on MARS and accessible through the CDS
Table 1
Request strategy: the best practices to maximise efficiency and minimise wait time
The CDS enforces constraints on the number of fields per request that can be retrieved for each dataset. The reason is to keep the system responsive for as many users as possible. The consequence is that you cannot download a whole dataset in one go; instead, you need to devise a retrieval strategy that, for example, loops over certain fields and retrieves the dataset in chunks, as sketched below.
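As an illustration, the sketch below chunks a GloFAS climatology download by looping over years with the cdsapi Python client. The dataset name cems-glofas-historical is the CDS identifier for the GloFAS climatology; the request keys and values shown are illustrative and should be checked against the dataset's download form on the CDS.

```python
# A minimal sketch of a chunked retrieval using the cdsapi client.
# The request keys/values below are illustrative; copy the exact ones
# from the dataset's download form on the CDS before running.
import cdsapi

client = cdsapi.Client()

for year in range(2000, 2005):
    request = {
        "variable": "river_discharge_in_the_last_24_hours",
        "hyear": str(year),
        "hmonth": [f"{m:02d}" for m in range(1, 13)],
        "hday": [f"{d:02d}" for d in range(1, 32)],
        "format": "grib",
    }
    # One request per year keeps each request within the per-dataset
    # field limit (500 fields for the GloFAS climatology, see Table 2).
    client.retrieve("cems-glofas-historical", request, f"glofas_{year}.grib")
```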
It is also very important, when only part of the geographical domain is needed, to crop the data to a region of interest (ROI), as illustrated below. This keeps the downloaded data size as small as possible.
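For datasets that support sub-area extraction, the CDS API accepts an area keyword given as [North, West, South, East] in degrees. The bounding box below is purely illustrative:

```python
# Adding an 'area' key ([North, West, South, East], in degrees) to the
# request dictionary crops the returned fields to that bounding box.
# The coordinates below are illustrative.
request = {
    # ... dataset-specific keys as in the example above ...
    "area": [72, -25, 30, 45],  # a box roughly covering Europe
}
```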
In Table 2, we list the maximum fields per request that you can retrieve for each dataset and the corresponding downloaded data size, assuming that you are:
- requesting GRIB2 file format
- not cropping
- requesting the shorter time steps (6 hours vs 24 hours), when available
- requesting ensemble perturbed forecasts
We also provide a short description of the request strategy and a link to a script that you can use to perform the request.
Table 2 - Request strategy
| Dataset | Max fields per request | Request strategy | File size per request (per loop iteration) | Link to example script |
| --- | --- | --- | --- | --- |
| GloFAS climatology | 500 | Loop over years | 2 GB | script |
| GloFAS forecast | 60 | Loop over years, months, days | 8.1 GB | script |
| GloFAS reforecast | 950 | Loop over months, days; consider cropping to ROI | 32 GB | |
| GloFAS seasonal forecast | 125 | Loop over years, months; consider cropping to ROI | 31.5 GB | |
| GloFAS seasonal reforecast | 125 | Loop over years, months; consider cropping to ROI | 31.5 GB | |
| EFAS climatology | 1000 | Loop over years | 450 MB | script |
| EFAS forecast | 1000 | Loop over years, months, days | 3.7 GB | script |
| EFAS reforecast | 200 | Loop over years, months, days | 2.3 GB | |
| EFAS seasonal forecast | 220 | Loop over months, days | 13.1 GB | |
| EFAS seasonal reforecast | 220 | Loop over months, days | 13.1 GB | |
Whilst submitting multiple requests in parallel can improve download time, overloading the system with too many requests will eventually slow down overall system performance. Indeed, the CDS penalises users who submit too many requests by decreasing the priority of their requests. In short: too many parallel requests will eventually result in a slower overall download time.
For this reason, we suggest limiting yourself to a maximum of 10 parallel requests, as in the sketch below.
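As a sketch of how to cap parallelism, a bounded thread pool keeps at most 10 requests in flight at any time; the per-year request reuses the illustrative keys from the chunked example above.

```python
# A sketch capping parallel downloads at 10, assuming per-year requests
# like the chunked example above; request keys remain illustrative.
from concurrent.futures import ThreadPoolExecutor

import cdsapi


def download_year(year: int) -> str:
    """Retrieve one year of GloFAS climatology (illustrative request keys)."""
    client = cdsapi.Client()
    request = {
        "variable": "river_discharge_in_the_last_24_hours",
        "hyear": str(year),
        "hmonth": [f"{m:02d}" for m in range(1, 13)],
        "hday": [f"{d:02d}" for d in range(1, 32)],
        "format": "grib",
    }
    target = f"glofas_{year}.grib"
    client.retrieve("cems-glofas-historical", request, target)
    return target


# At most 10 requests are in flight at any time; the remaining years
# queue locally instead of lowering your priority on the CDS side.
with ThreadPoolExecutor(max_workers=10) as pool:
    for path in pool.map(download_year, range(2000, 2020)):
        print("downloaded", path)
```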