Hello.
I want to make downloading with wget easier.
The idea is the following: all download links are placed in one file.
This file is parsed using grep, and wget is started for each link.
If wget stops with an error, it should be restarted with the option -c (continue the partial download), and so on, until the file is downloaded completely.
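Roughly, this sketch is what I have in mind (untested, and assuming one URL per line in a file called links.txt):

    #!/bin/bash
    # Pull all http(s) URLs out of links.txt and download each one,
    # restarting wget with -c until it finally exits successfully.
    grep -o 'https\?://[^[:space:]]*' links.txt | while read -r url; do
        until wget -c "$url"; do
            echo "wget failed for $url, retrying..." >&2
            sleep 5
        done
    done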
I'll try to write such a script, but I have 2 questions.
1. How can I detect the result of a wget execution: was it a timeout, an error, or a success? (My current guess is in the first sketch below.)
2. About controlling the wget processes. It would be nice if wget could save the percentage of downloaded data to a file that can be parsed from outside. If I redirect the output of wget to a file (wget ... > log.txt), it will write data every second, so won't the log file eventually become huge?
Or is it better if each wget process logs to its own file, and then all these files can be parsed using grep? What do you think? (My idea is in the second sketch below.)
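For question 1, my current guess is to check wget's exit status; if I understand correctly, wget 1.12 and later document distinct exit codes (e.g. 4 for network failures, which should include timeouts), while older versions just return 1 on any error. Something like:

    # -T 30 sets a 30-second timeout, -t 1 disables wget's own retries
    # so the exit status reflects a single attempt.
    wget -T 30 -t 1 -c "$url"
    status=$?
    case $status in
        0) echo "success" ;;
        4) echo "network error or timeout" ;;
        *) echo "other failure, exit code $status" ;;
    esac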
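For question 2, my idea so far: give each wget its own log file with -o, and use --progress=dot:mega so the log stays small (roughly one line per 3 MB instead of a redrawn progress bar, with each line ending in the completed percentage). The last percentage can then be grepped out from outside. A sketch ($logfile is just a placeholder name here):

    # One log file per download; dot:mega keeps the log compact.
    wget -c -o "$logfile" --progress=dot:mega "$url" &

    # From outside, pull the most recent percentage out of the log:
    grep -o '[0-9]\+%' "$logfile" | tail -n 1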