Mailing List Archive: 49091 messages
 

about the benchmarks

 [1/10] from: koopmans::itr::ing::nl at: 10-Oct-2001 16:56


about the benchmarks.... the 450/sec was Rugby. We will add FastCGI or LRWP, but we know we can scale that up easily with some load balancing on the webserver side. Although I know people were hoping that this was a FastCGI thing, it is actually good news, as REBOL is good performance-wise.

--Maarten

 [2/10] from: petr:krenzelok:trz:cz at: 10-Oct-2001 18:51


----- Original Message -----
From: "Maarten Koopmans" <[koopmans--itr--ing--nl]>
To: <[rebol-list--rebol--com]>
Sent: Wednesday, October 10, 2001 4:56 PM
Subject: [REBOL] about the benchmarks

> about the benchmarks.... the 450/sec was Rugby. We will add fastcgi or
> lrwp, but we know we can scale that up easily with some load balancing
> on the webserver side.

Ah, probably so, although I am not sure if Apache itself provides it. I was more interested in your Rugby setup. So, to be clear: in the FastCGI scenario, a user types some URL, and your FastCGI script is called. What does such a script do - connect to your Rugby server? If so, how does webserver load balancing help you, if your script connects to only one running Rugby server?

-pekr-

 [3/10] from: m:koopmans2:chello:nl at: 10-Oct-2001 21:30


First of all, you can have any number of Rugby servers running. In your FastCGI script, you simply do a wait on the fastcgi port. Upon receiving a request you pass the data to a Rugby server (the actual application). You use /deferred in Rugby to achieve this. Now you return to listening on the fastcgi port and periodically check if the result has already arrived. Two things can happen:

- You get a new fastcgi request. You do the same, but now on the next Rugby server. Very simple load balancing, but most of the time sufficient.
- You get the result. You write it out on the corresponding accepted fastcgi port (that you saved in a block or so), close the port, and start listening again.

So basically you use the fastcgi thing as a load balancer to the Rugby servers. As you use non-blocking deferred Rugby calls, you have very fast setup times (both on fastcgi and Rugby). This is the picture:

    webserver
    ------------
    fastcgi
    ------------
    Rugby   Rugby   Rugby

HTH,

Maarten
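The dispatch scheme described above (hand each new request to the next Rugby server in turn, and remember which server is computing which result) can be sketched in a few lines. This is a Python toy model for illustration only; the class and method names are hypothetical, not Rugby's or FastCGI's actual API:

```python
from collections import deque

class RoundRobinDispatcher:
    """Toy model of the FastCGI front end described above: each incoming
    request goes to the next backend in turn (very simple load balancing),
    and pending requests remember which backend is serving them."""

    def __init__(self, backends):
        self.backends = deque(backends)  # e.g. addresses of Rugby servers
        self.pending = {}                # request id -> backend handling it

    def dispatch(self, request_id):
        backend = self.backends[0]
        self.backends.rotate(-1)         # next request goes to the next server
        self.pending[request_id] = backend
        return backend

    def complete(self, request_id):
        # result arrived: forget the request and report who served it
        return self.pending.pop(request_id)
```

A usage pass: with three backends, requests 1..4 land on servers a, b, c, a in turn, which is exactly the "now on the next Rugby server" behaviour in the message.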

 [4/10] from: m:koopmans2:chello:nl at: 10-Oct-2001 23:16


The OS helps you here: it accepts a connection even when the application is busy. This is called the backlog. It is set to 5 by default, so you can have 5 times the number of Rugby servers in terms of connections before blocking starts.

You can change this value in hipe-serv.r. It was 15 (commented out for Mac compat). You can comment it in, run build-rugby.r, and you are done. Higher than 15 is most of the time useless (see the OS docs for details).

--maarten
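The backlog being discussed is the standard listen-queue length of a TCP socket, the same knob hipe-serv.r exposes in REBOL. A minimal Python illustration (the address and backlog value here are arbitrary choices, not anything mandated by Rugby):

```python
import socket

# The kernel accepts and queues incoming connections for a listening
# socket even while the application is busy handling earlier ones.
# listen(n) sets how many such pending connections may queue up (the
# "backlog") before further connection attempts start failing.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
srv.listen(15)               # the value mentioned above; 5 is a common default
port = srv.getsockname()[1]
srv.close()
```

As the message notes, values much above 15 rarely help: the queue only smooths bursts, it does not add serving capacity.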

 [5/10] from: petr:krenzelok:trz:cz at: 11-Oct-2001 1:03


----- Original Message -----
From: "Maarten Koopmans" <[m--koopmans2--chello--nl]>
To: <[rebol-list--rebol--com]>
Sent: Wednesday, October 10, 2001 11:16 PM
Subject: [REBOL] Fw: Re: about the benchmarks

> The OS helps you here: it accepts a connection even when the application
> is busy. This is called the backlog. It is set to 5 by default. So you
> can have 5 times the number of Rugby servers in terms of connections
> before blocking starts.
>
> You can change this value in hipe-serv.r. It was 15 (commented out for
> Mac compat). You can comment it in, build-rugby.r and you are done.
> Higher than 15 most of the time is useless (see the OS docs for details).

Heh, that's cool. I didn't know what that item in the port structure was good for. One question: getting stubs to the client gets the function's source code there along with the Rugby refinements (/http ...), but what if my function calls another custom one, or even worse, some DLL routine? You could get the routine transferred to the client, but hardly the whole library ... just curious ...

... maybe you could add a one-to-three-sentence comment for each module on your website. It would help people understand what is inside, imo ....

-pekr-

 [6/10] from: g:santilli:tiscalinet:it at: 11-Oct-2001 11:20


Maarten Koopmans wrote:
> Now you return to listening on the fastcgi port and periodically check
> if the result has already arrived.

Do you need to poll the Rugby server? Would it be possible to just wait on the TCP port? Just being curious,

Gabriele.
--
Gabriele Santilli <[giesse--writeme--com]> - Amigan - REBOL programmer
Amiga Group Italia sez. L'Aquila -- http://www.amyresource.it/AGI/

 [7/10] from: petr:krenzelok:trz:cz at: 11-Oct-2001 12:04


Gabriele Santilli wrote:
> Maarten Koopmans wrote:
>
> > Now you return to listening on the fastcgi port and periodically
> > check if the result has already arrived.
>
> Do you need to poll the Rugby server? Would it be possible to just
> wait on the TCP port?

I thought about the same. The question is: if you imagine hundreds of users connected to GoRIM, for example, how much of the OS resources do the memory allocations for persistent connections eat? I thought it would be even better for GoRIM or RIM themselves to switch to push - accept clients, store the opened ports in a block, and insert new messages in a loop to each client ... does IRC work that way? Anyway, GoRIM performs nicely now; I am, as you are, just curious :-)

-pekr-

 [8/10] from: koopmans:itr:ing:nl at: 11-Oct-2001 13:58


Hi Gabriele,

This is a common misunderstanding that I'll put in the FAQ. Polling is done on the client side! It merely checks if all data has arrived at the client!

What Rugby does:

- You open a deferred request and get a ticket number. This is a non-blocking, non-buffered port.
- Whenever you 'poll' using result-ready?, the client reads whatever data is available and checks if the message is complete. If the message is complete, result-ready? returns true and you get the result by calling get-result; otherwise it returns false.

Put this in an event queue, do the ordering of the messages correctly, and you have Gorimnb ;-)

NOTE: you are programming asynchronously, which is, well, different. Ask Graham ;-)

You can write your own Rugby client that integrates this with a wait on the fastcgi ports, assembles the data, checks to see if it is complete, and updates the wait list accordingly.

HTH,

Maarten
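The client-side polling just described (a deferred request yields a ticket; each poll reads whatever bytes have arrived and asks "is the message complete yet?") can be modeled as a toy in Python. The network is simulated and the method names only loosely mirror Rugby's result-ready?/get-result; none of this is Rugby's real implementation:

```python
class DeferredClient:
    """Toy model of Rugby's deferred calls: each request gets a ticket,
    data trickles in over a non-blocking connection, and polling just
    checks whether the full message has been assembled on the client."""

    def __init__(self):
        self._tickets = {}
        self._next_ticket = 0

    def send_deferred(self, incoming_chunks):
        # incoming_chunks simulates data arriving piecewise on the port
        self._next_ticket += 1
        self._tickets[self._next_ticket] = {
            "todo": list(incoming_chunks),  # not yet "on the wire" read
            "buf": [],                      # what the client has so far
        }
        return self._next_ticket

    def result_ready(self, ticket):
        t = self._tickets[ticket]
        if t["todo"]:
            t["buf"].append(t["todo"].pop(0))  # read whatever is available
        return not t["todo"]                   # True once message is complete

    def get_result(self, ticket):
        return "".join(self._tickets.pop(ticket)["buf"])
```

The key point of the message survives in the model: polling never touches the server, it only inspects the client's own buffer, so repeated result_ready calls are cheap.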

 [9/10] from: petr:krenzelok:trz:cz at: 11-Oct-2001 14:39


I'd better repeat that I am no port guru, so maybe I just understand something wrongly?

Maarten Koopmans wrote:

> Hi Gabriele,
>
> This is a common misunderstanding that I'll put in the FAQ.
>
> Polling is done on the client side! It merely checks if all data has
> arrived on the client!

... and that is imo what Gabriele means - what does "polling" mean? Setup of a connection, sending data, closing the connection? It causes some TCP overhead imo, as establishing/closing connections each time means more packets on the network ...

> What Rugby does:
> - You open a deferred request and get a ticket number. This is a non-blocking

<<quoted lines omitted: 4>>

> and you get the result by calling get-result. Put this in an event queue, do
> the ordering of the messages correct and you have Gorimnb ;-)

So, you close and establish the connection three times. Wouldn't it be possible to keep the first connection open the whole time, and just perform inserts/reads on the port?

Thanks,

-pekr-

 [10/10] from: koopmans:itr:ing:nl at: 11-Oct-2001 16:27


Pekr,

I keep the connection open. Result-ready? re-uses it until it is finished.

--Maarten

Notes
  • Quoted lines have been omitted from some messages.