Hi, all.

I'm new to using the CDS tool, so I have been referring to several scripts on this site.

But the running time is too long.

Is downloading each variable per level really the most efficient way?

I have to download all variables at all levels for 20 years.

How can I do it quickly?


Sorry for my bad English.

Thanks for reading.



Here is my script:

daily_era5_prs.py


When I run it, I get:

2023-11-16 19:39:56,653 INFO Request is queued
2023-11-16 19:41:53,024 INFO Request is running
2023-11-16 19:42:51,044 INFO Request is completed
2023-11-16 19:42:51,044 INFO Downloading ((file))
2023-11-16 19:43:27,703 INFO Download rate 859.3K/s 
2023-11-16 19:43:27,993 INFO Welcome to the CDS
2023-11-16 19:43:27,993 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/tasks/services/tool/toolbox/orchestrator/workflow/clientid-d0012bb4d97a4c679e1263cb3aa20abd
2023-11-16 19:43:28,339 INFO Request is queued


I'm generating a separate script for each year.

When I request just one year, the time from queued to running is about 2 minutes,

but for 20 years it is about 28 minutes.


How can I download more efficiently?



2 Comments

  1. Help me, please! I'm serious.
    Can you recommend any method?

  2. It's generally much faster to download shorter lengths of time (like one month) and then combine the NetCDF files yourself. Running a multi-threaded script (e.g. https://realpython.com/intro-to-python-threading/) also helps, so that several of your requests sit in the queue at the same time - this is how we normally do it. Your download rate also seems rather low, so downloading the files in parallel would help as well; see the sketch below.
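
    For reference, here is a minimal sketch of the split-by-month, multi-threaded approach described above, using the cdsapi package. The dataset name, variables, pressure levels, and years below are assumptions for illustration - replace them with whatever your daily_era5_prs.py actually requests.

    import calendar
    import concurrent.futures

    import cdsapi

    # Placeholder request settings (assumptions) - adapt to your own request.
    DATASET = "reanalysis-era5-pressure-levels"
    VARIABLES = ["temperature", "u_component_of_wind", "v_component_of_wind"]
    LEVELS = ["500", "850", "1000"]
    YEARS = range(2003, 2023)   # 20 years
    MONTHS = range(1, 13)

    def download_month(year, month):
        """Submit one small, single-month request and download it to its own file."""
        c = cdsapi.Client()
        n_days = calendar.monthrange(year, month)[1]
        target = f"era5_prs_{year}_{month:02d}.nc"
        c.retrieve(
            DATASET,
            {
                "product_type": "reanalysis",
                "variable": VARIABLES,
                "pressure_level": LEVELS,
                "year": str(year),
                "month": f"{month:02d}",
                "day": [f"{d:02d}" for d in range(1, n_days + 1)],
                "time": [f"{h:02d}:00" for h in range(24)],
                "format": "netcdf",
            },
            target,
        )
        return target

    if __name__ == "__main__":
        # A few worker threads keep several requests sitting in the CDS queue
        # at the same time, instead of waiting for each one to finish in turn.
        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(download_month, y, m) for y in YEARS for m in MONTHS]
            for fut in concurrent.futures.as_completed(futures):
                print("finished", fut.result())

    The resulting monthly NetCDF files can then be merged afterwards, for example with xarray.open_mfdataset or CDO's mergetime.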