author    Nick White <hg@njw.me.uk>  2011-08-03 00:34:35 +0100
committer Nick White <hg@njw.me.uk>  2011-08-03 00:34:35 +0100
commit    bbce84ba6c4b7a208bf872e553abc98ed2ecfa20
tree      06cbe583c552d458ba03c061e384672c6849840c
parent    6b8f282b5b0c0621ebcf6124f6a6cb90fed47297
Flush -p stdout per line
-rw-r--r--  TODO       | 6 +++++-
-rw-r--r--  getgbook.c | 1 +
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/TODO b/TODO
@@ -1,3 +1,7 @@
+got a stack trace when a connection seemingly timed out (after around 30 successful calls to -p)
+
+getgmissing doesn't work brilliantly with preview books as it will always get 1st ~40 pages then get ip block. getgfailed will do a better job
+
 list all binaries in readme and what they do
 
 # other utils
@@ -28,4 +32,4 @@ to be fast and efficient it's best to crank through all the json 1st, filling in
 this requires slightly fuller json support
 could consider making a json reading module, ala confoo, to make ad-hoc memory structures from json
 
-Note: looks like google allows around 3 page requests per cookie session, and about 40 per ip per [some time period]. If I knew the time period, could make a script that gets all it can, gets a list of failures, waits, then tries failures, etc. Note these would also have to stop at some point; some pages just aren't available
+Note: looks like google allows around 3 page requests per cookie session, and exactly 31 per ip per [some time period]. If I knew the time period, could make a script that gets all it can, gets a list of failures, waits, then tries failures, etc. Note these would also have to stop at some point; some pages just aren't available
diff --git a/getgbook.c b/getgbook.c
@@ -124,6 +124,7 @@ int main(int argc, char *argv[])
 			printf("%s ", page->name);
 			if(page->num != -1) printf("%d", page->num);
 			printf("\n");
+			fflush(stdout);
 		}
 		free(page);
 	}
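
The one-line change to getgbook.c is about stdio buffering: when stdout is connected to a pipe rather than a terminal, the C library switches it from line buffering to full buffering, so a consumer of `getgbook -p` sees nothing until several kilobytes of output have accumulated. A minimal standalone sketch of the effect follows; it is not code from this repository, and the loop body and sleep are illustrative stand-ins for the real page-fetching work.

	/* sketch: without the fflush, a pipe reader sees nothing
	 * for many iterations, because piped stdout is fully buffered */
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int i;
		for(i = 0; i < 10; i++) {
			printf("page %d\n", i);
			fflush(stdout);  /* force the completed line out now */
			sleep(1);        /* stand-in for a slow network fetch */
		}
		return 0;
	}

An alternative would be a single setvbuf(stdout, NULL, _IOLBF, 0) at startup to force line buffering everywhere, but an explicit fflush() after each completed line, as this commit does, keeps the change local to the one output path that needs it.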