World: r3wp
[!Cheyenne] Discussions about the Cheyenne Web Server
Dockimbel 25-May-2011 [10674x2] | On the fly: that scheme doesn't work for big files (think 10MB or 100MB) |
if you have to re-compress the file on each request | |
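One common way around that per-request cost, sketched here in REBOL 2 purely as an illustration (not something Cheyenne does today): keep a cached .gz next to the source file and regenerate it only when the source is newer, so a big file is compressed once per change rather than once per request. The helper name is hypothetical, and the shell redirection may need CALL's /shell refinement on some platforms.

    ; hypothetical helper: return the compressed copy to serve to gzip-capable clients
    gzipped-copy: func [file [file!] /local gz] [
        gz: join file %.gz
        ; regenerate only if the cached copy is missing or older than the source
        if any [
            not exists? gz
            (modified? gz) < (modified? file)
        ][
            call/wait rejoin ["gzip -c " to-local-file file " > " to-local-file gz]
        ]
        gz
    ]

    ; example: compressed once, then reused until the source file changes
    print gzipped-copy %www/js/app.js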
onetom 25-May-2011 [10676x3] | maybe if you're serving to a 100 Mbit connection from a Pentium I... |
such automatic compression makes sense only for mid-sized files, where the file needs to be seamlessly uncompressed on the other side. if the file is bigger, you want to be more specific about the compression method anyway... | |
on the other hand, I just ran time gzip -c some.avi > /dev/null where the avi was 1.2GB and the runtime was 1m19s on my Mac laptop, which is about 15 MByte/s, so roughly 120 Mbit/s... I use wifi most of the time, which is usually below 100 Mbit... | |
Andreas 25-May-2011 [10679] | gzipping angular 0.9.15 reduces size from 330k to 86k in 0.06 seconds on my machine (with CALL/wait of gzip from within REBOL). |
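For reference, a minimal REBOL 2 sketch of that kind of measurement (the file name is an assumption; CALL/wait blocks until the external gzip exits, and the redirection may need the /shell refinement depending on the platform):

    file: %angular-0.9.15.js
    gz: join file %.gz
    start: now/precise
    ; gzip -c leaves the original untouched and writes the compressed stream to stdout
    call/wait rejoin ["gzip -c " to-local-file file " > " to-local-file gz]
    print ["compressed in:" difference now/precise start]
    print ["bytes before:" size? file "after:" size? gz]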
Dockimbel 25-May-2011 [10680] | 60 ms for a single request is way too much. |
Andreas 25-May-2011 [10681] | so on-the-fly compression of only static *.js *.css files would probably be worth it |
onetom 25-May-2011 [10682x2] | muhhahahaaa :) |
that request takes several seconds to download in the scenarios I mentioned... | |
Andreas 25-May-2011 [10684] | those 60ms are worth it if the connection is slower than ~4 MByte/sec: at that speed, the ~244k saved by compression already takes about 60ms to transfer |
Dockimbel 25-May-2011 [10685] | It is a waste of server resources. |
Andreas 25-May-2011 [10686] | depends on your usage scenario |
onetom 25-May-2011 [10687x2] | even my Amazon micro instance is mostly idle... |
it would be a waste if it were happening constantly... | |
Andreas 25-May-2011 [10689] | if you have no significant load but your users are typically accessing over relatively slow lines, it would result in a significant speedup. I'd certainly not enable such a thing by default, obviously. but for some scenarios (like onetom's, probably) this relatively slow on-the-fly compression using CALL would still be worth it. |
onetom 25-May-2011 [10690] | what kind of connections do you guys live on? |
Dockimbel 25-May-2011 [10691] | you mean server connections? |
onetom 25-May-2011 [10692] | no, client connections. |
Dockimbel 25-May-2011 [10693x2] | 6MBit/s |
(from home) | |
onetom 25-May-2011 [10695] | even in the Singapore hackerspace we have only 10 Mbit, which is far from an actual 10 Mbit in most directions; at home we have just ~2 Mbit, hardly enough to watch YouTube in real time; in Phuket, Thailand it's 2.5 Mbit; and many, many times I'm on EDGE at 10-25 kB/s or just 4 kB/s GPRS |
Kaj 25-May-2011 [10696] | Yep, it's a problem that most software is developed by Westerners |
onetom 25-May-2011 [10697x3] | the instance I'm running this shit from is a small EC2 instance. it compresses the mentioned file in 44ms the first time, then ~28ms subsequently. no matter how I look at it, it is worth supporting this for the usual text MIME types, especially within the 10kB - 10MB size range |
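A rough REBOL sketch of that kind of gate, independent of Cheyenne's actual mod API (the MIME list, the size bounds, and the function name are all assumptions):

    compressible-types: [
        "text/html" "text/plain" "text/css"
        "application/javascript" "application/json" "application/xml"
    ]
    min-size: 10 * 1024           ; below ~10 kB the saving rarely pays for the overhead
    max-size: 10 * 1024 * 1024    ; above ~10 MB per-request compression gets too costly

    compress?: func [mime [string!] size [integer!] accept-encoding [string! none!]] [
        all [
            accept-encoding
            find accept-encoding "gzip"     ; client must advertise gzip support
            find compressible-types mime
            size >= min-size
            size <= max-size
        ]
    ]

    ; example: a 330 kB JavaScript file requested by a gzip-capable browser
    print compress? "application/javascript" 330'000 "gzip, deflate"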
Kaj: whats u usual client downstream speeds in the netherlands? | |
s/ u / your / | |
Kaj 25-May-2011 [10700x2] | I'm on 20+ Mbit/s here |
But I pretend it's way less when programming :-) | |
onetom 25-May-2011 [10702x2] | how do you pretend? in your mind, or actually with some traffic shaper? :) |
is it really that fast usually? | |
Kaj 25-May-2011 [10704] | Well, I don't even pretend. I just design things to be as efficient as possible, and that means it almost never needs the speeds here |
onetom 25-May-2011 [10705] | my MacBook could still saturate it 6 times over, though |
Kaj 25-May-2011 [10706x2] | The nice thing about this connection is that it's (mostly) symmetric. But most people here have downstreams roughly in that order, although there are also 2 and 100 Mbit connections around |
You're right about the speed relations. We run ancient hardware, but what usually matters is network speed | |
onetom 25-May-2011 [10708] | the funny thing with this AngularJS framework is that most of the code is static files... hardly any RSP processing is needed. small JSON is generated dynamically, but even the frontend text dictionaries are static JavaScript files, and AngularJS does the language switching live without any server round trip... |
Kaj 25-May-2011 [10709] | Yeah, because the dynamics are moved to the client |
Dockimbel 25-May-2011 [10710] | FYI, I plan to work this Sunday on: - adding proper log file relocation ability for UNIX platforms - making a draft mod for testing static file compression support |
onetom 25-May-2011 [10711] | awesome! |
Kaj 25-May-2011 [10712] | Cool |
Endo 26-May-2011 [10713] | That's nice! |
onetom 28-May-2011 [10714] | studying V8 + Node + Express + Connect. it looks like a great architecture... I wish it were REBOL :) |
Kaj 28-May-2011 [10715] | How does it integrate with AngularJS? |
onetom 28-May-2011 [10716] | I would just use the router and the bodyParser middlewares from it, so it handles the JSON parsing back and forth and the RESTful URL parsing |
onetom 29-May-2011 [10717] | https://github.com/joyent/node/wiki/modules#compression these are the kinds of compression solutions available for Node.js; they also have a solution based on the plain gzip command-line utility |
Kaj 29-May-2011 [10718] | The zlib binding is written in C++, with templates |
onetom 29-May-2011 [10719x2] | Dockimbel: could you work on the log file location / compression stuff? |
I mean, were you able to work on it? | |
Dockimbel 29-May-2011 [10721] | not yet |
onetom 29-May-2011 [10722x2] | as I'm browsing Node.js, I see so many features implemented that are missing from Cheyenne but that I have already wanted to use, which makes me seriously consider switching to Node. Cheyenne has the right foundations, but I feel it requires too much studying of the internals to extend it in a practical way. I think it's better if you focus on Red; that's something not many people can or want to do, but it could affect the world big time. |
for example, I was mapping a company directory service under each company's own subdomain, just to allow them to contribute to this shared directory of companies. but as I see, there is a cross-origin resource sharing module for Node.js's Connect framework, which can take care of sending the Access-Control-Allow-Origin headers | |