Multiple gigabytes a second, almost half a million batches out for processing at any given time. A few days back they passed one *petabyte* of data, and now they're at about one and a third.
(The list of participants is so long that when I try to view the full list the tab crashes, so I don't actually know how much of it is mine. A relatively tiny amount, I'm sure--I'm only running one batch at a time, so as not to overwhelm my bandwidth or processor--but some.)
no subject
Date: 2019-04-02 09:30 pm (UTC)

Godspeed to you all. ♥
no subject
Date: 2019-04-03 04:09 pm (UTC)

Now that the tracker isn't constantly updating with new results, my browser was able to handle loading the full list. I did 102 GB (2,450 batches). Since (from what I can tell) for most of the project's life the limiting factor on speed was the number of available processing nodes, and given that the project did not complete in time, I expect most of that 102 GB wouldn't have been scraped without me. It's nice to know you've made a difference to something you care about.
(I never used Google+ myself, but it's the principle of the thing, you know?)
My node has now switched back to tracing where shortened URLs point (since if a URL-shortening service shuts down, it can no longer tell you where its redirects led), which is the default project when nothing more pressing is going on. The limiting factor on URL-tracing is pretty much never the number of processing nodes, so they don't *really* need me for this, but I like to keep it running so that it can immediately kick into action when a new project starts. I didn't notice for a couple of days that the Google+ project had started, but my node did.
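The idea behind redirect tracing is simple: request the shortened URL, read the `Location` header it redirects to, and repeat until you reach a non-redirect response. A minimal sketch of that loop (not ArchiveTeam's actual code; the `fetch` function here is a hypothetical stand-in you'd supply, so the logic can be shown without live network requests):

```python
from urllib.parse import urljoin

def trace_redirects(url, fetch, max_hops=10):
    """Follow HTTP redirects from a (possibly shortened) URL.

    `fetch` takes a URL and returns (status_code, location_header_or_None).
    Returns the chain of URLs visited, ending at the final target.
    """
    chain = [url]
    for _ in range(max_hops):
        status, location = fetch(url)
        # Stop at any non-redirect status, or if no Location header was given.
        if status not in (301, 302, 303, 307, 308) or location is None:
            break
        # Location may be relative; resolve it against the current URL.
        url = urljoin(url, location)
        chain.append(url)
    return chain
```

In real use, `fetch` would issue an HTTP request (without auto-following redirects) and return the status and `Location` header; recording the whole chain, rather than just the final target, is what makes the data useful after the shortener itself disappears.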