Client error: repeated requests - LogJam
Client error: repeated requests [May. 31st, 2007|12:05 am]


I am running LogJam 4.5.3 on Ubuntu Feisty (7.04). I am a paid member of LiveJournal.

When I click on Journal | Synchronize Offline Copy..., it begins to download the journal, then I receive the following error message:
"Client error: Client is making repeated requests. Perhaps it's broken?"

When I search the journal, it looks like only the first 100 entries are available. Therefore, I assume some rate limiting is happening somewhere that is stopping me from downloading the entire journal, and the client is just displaying the error message from the server. I have searched through the normal options (not the console-only options) and haven't seen anything that seems like it would be causing this. Also, I figure that if any rate limiting were going on, as a paid user I'd be exempt.

Any clue what's going on, and how to get around it?

Also, the Offline Sync feature seems to be (operative phrase) poorly documented. Either that, or my Google-fu is lacking. I don't have any information on what it downloads (entry text, comments, etc.), how it's stored, or how I can get it into a more useful HTML format. So any help in that direction would be appreciated as well.

Thanks in advance!

From: evan
2007-05-31 07:18 am (UTC)
Try waiting a day and running sync again... it ought to work, maybe?

All the sync stuff is poorly documented and implemented, unfortunately. The output format is a sqlite database, in ~/.logjam somewhere. (I actually forget -- it's been that long since I've worked on LogJam!) I think ~/.logjam/servers/livejournal.com/users/yourusername/journal.db or something like that.
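Going by that path, here's a rough Python sketch of one way to get at the data (this is not part of LogJam; the journal.db path above and the idea of introspecting the schema are guesses, since the schema isn't documented). It dumps every table in the database to one crude HTML page, which also speaks to the "more useful HTML format" question in the original post:

```python
import html
import os
import sqlite3
import sys


def dump_journal_to_html(db_path, out_path):
    """Dump every table in a LogJam journal.db to one crude HTML page.

    The database schema isn't documented, so this introspects
    sqlite_master instead of assuming any table or column names.
    """
    conn = sqlite3.connect(os.path.expanduser(db_path))
    cur = conn.cursor()
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    parts = ["<html><body>"]
    for table in tables:
        parts.append("<h2>%s</h2>" % html.escape(table))
        # Table names come straight from sqlite_master, so quoting them is safe.
        cur.execute('SELECT * FROM "%s"' % table)
        cols = [d[0] for d in cur.description]
        parts.append("<table><tr>%s</tr>" % "".join(
            "<th>%s</th>" % html.escape(c) for c in cols))
        for row in cur.fetchall():
            parts.append("<tr>%s</tr>" % "".join(
                "<td>%s</td>" % html.escape(str(v)) for v in row))
        parts.append("</table>")
    parts.append("</body></html>")
    conn.close()
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(parts))


if __name__ == "__main__":
    # Default path is the guess from the comment above; adjust to taste.
    default = "~/.logjam/servers/livejournal.com/users/yourusername/journal.db"
    dump_journal_to_html(sys.argv[1] if len(sys.argv) > 1 else default,
                         "journal.html")
```

Since it discovers tables and columns at run time, it should produce something readable even if the actual schema differs from whatever you expect.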
From: klfjoat
2007-05-31 08:39 am (UTC)
When I looked, I found the entries in a similar spot. However, they were broken up into directories by year, and each year's directory held a number of XML files. Each XML file contained a number of entries. I couldn't see any rhyme or reason to the names of the XML files.

I don't know if maybe that's what it downloads before it makes the sqlite db, or what. I'm no programmer, I just know enough about tech to explain stuff and be dangerous. :-)
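Purely as a way to poke at the layout described above, here's a short Python sketch (the per-year directory shape is taken from that description; treating each child of a file's root element as one entry is an assumption, since the tag names are unknown) that walks the cache and counts how many entries each XML file seems to hold:

```python
import os
import xml.etree.ElementTree as ET


def inventory_cache(root):
    """Count top-level elements in each XML file under a LogJam cache tree.

    The per-year directory layout is as described above; since the
    per-entry tag names are unknown, every child of each file's root
    element is counted as one entry.
    """
    counts = {}
    for dirpath, _dirnames, filenames in os.walk(os.path.expanduser(root)):
        for name in sorted(filenames):
            if not name.endswith(".xml"):
                continue
            path = os.path.join(dirpath, name)
            try:
                counts[path] = len(ET.parse(path).getroot())
            except ET.ParseError:
                counts[path] = -1  # file exists but isn't well-formed XML
    return counts


if __name__ == "__main__":
    for path, n in sorted(inventory_cache("~/.logjam").items()):
        print("%5d  %s" % (n, path))
```

Running it over ~/.logjam should at least show whether the mystery filenames map to anything like a fixed number of entries per file.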

Then let me ask a different question... I've seen these "download your journal" Windows programs out there, but I certainly don't trust them with my LJ PW (don't know if they're sending it elsewhere). Do you know of any Linux-compatible open-source or public domain script or anything I can use to download my entire journal, comments and all?
From: evan
2007-05-31 03:38 pm (UTC)
I (more recently than LogJam) wrote this, which is what I use instead of LogJam:
From: klfjoat
2007-05-31 05:02 pm (UTC)
WOW. NICE. Just from reading over what you've been able to implement, it looks great. And using FUSE to allow filesystem access to posts seems cool.

Rock on. I've only recently gotten into Ruby. Hopefully, your code is well-documented. :-)

From: rhialto
2007-06-08 02:46 pm (UTC)
Looks interesting! But can it work without a database? (I don't like databases; I want plain text.)