[REBOL] Horribly Busy, but Woohoo! a Web Server Object :)
From: rex::smallandmighty::com at: 25-Aug-2000 15:03
I'm in the process of moving to a new apartment this weekend, so I don't
have the time to properly upload this to the script libraries; I'm just
appending it to this email. Hopefully nothing will get goofed up... At
least it's somewhat commented. :)
This is a generic self-contained HTTP server object. It handles multiple
connections and has rudimentary cookie support. It currently handles GET,
PUT, POST, and HEAD requests.
To make good use of this object, you should at least override the
get-handler method, and possibly the post-handler, put-handler, and
head-handler methods, depending on what you want to do.
Fortunately, those four methods are the only ones you should have to
override, and odds are you'll only need get-handler and post-handler,
since HEAD and PUT requests are pretty rare. You'll probably want to
change the listen port from the default of 80 if you plan on running
other servers on the same machine.
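As a quick sketch of the idea (the port and the canned page here are
purely illustrative, not part of the object), a derived server that
answers every GET with a fixed page might look something like:

    my-server: make http-server
    [
        port: 8000
        get-handler: func [ hdrs ]
        [
            make handler-response
            [
                code: "200" mime: "text/html"
                cookie: "" entity: "<html><body>Hello!</body></html>"
                hdrs: copy []
            ]
        ]
    ]
    my-server/run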
As presented, it does *not* serve files from disk or run CGI scripts. The
default handlers merely display the headers and data that the client sent
over. (Good for debugging, and an interesting learning experience...) The
purpose of this object is to facilitate the creation of special-purpose
servers that use the HTTP protocol.
All of the normal functionality one expects in a traditional web server
can be easily layered on top.
XML-RPCers should love this, and once I'm done with the move, I'll join
in the effort to get a Rebol XML-RPC server up and going. It should be a
night's work with this object to get a rudimentary server up and running.
All of the HTTP stuff is handled, you can just concentrate on the meat,
i.e. parsing the request XML, handling the method invocation, and
generating the response XML.
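For the curious, the overall shape of such a server (purely a sketch: the
XML parsing, method dispatch, and fault responses are left as comments,
and the stub just echoes a fixed methodResponse) could be:

    xmlrpc-server: make http-server
    [
        post-handler: func [ hdrs data /local result ]
        [
            ;Parse the request XML here (e.g. with parse-xml),
            ;find the methodName, invoke it, and build the reply.
            ;This stub just returns a canned methodResponse.
            result: {<?xml version="1.0"?>
    <methodResponse><params><param>
    <value><string>ok</string></value>
    </param></params></methodResponse>}
            make handler-response
            [
                code: "200" mime: "text/xml"
                cookie: "" entity: result
                hdrs: copy []
            ]
        ]
    ]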
I haven't done any substantial benchmarking, but under Mac OS 8.6, using
the experimental Rebol 2.4 PPC build, this server seems able to send data
at ~150-200k/s. (It depends on where and how you're generating the data.)
To give this thing a try, do something like this:
s: make http-server []
s/run
Thanks to everyone on the list who has answered questions that have
indirectly helped me with this. Enjoy!
:Eric
;Start of code
Rebol
[
Title: "Rebol HTTP Server Object"
Date: 25-Aug-2000
Author: "Eric King"
]
;
; Random notes:
; If the debug attribute is set to true (which it is by default),
; attempting to read (GET) from /shutdown will cause the web server
; to clean up and shut down.
;
; As presented here, this web server does *not* serve pages from disk.
; Why? Because I wanted a very general and flexible HTTP protocol server.
; That is, I did not necessarily want URIs to map directly to files.
; That said, getting this to serve pages, images, sounds, etc. is
; really easy. In your get-handler method, convert the Target-URI:
; header to a path and put the resulting file in the entity field of your
; handler-response object. Also set the code, cookie, hdrs, and mime fields
; to something appropriate.
; e.g. response/code: "200" response/mime: "text/plain" response/cookie: ""
; response/hdrs: copy []
;
; response/entity: open %target-file.txt
; or
; response/entity: read %target-file.txt (This seems faster under MacOS 9)
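;
; As a (hypothetical) sketch of the above, a disk-serving get-handler in a
; derived object might look like this -- the %./ root and the assumption
; that everything is text/html are illustrative only, and there is no
; sanity checking of the path, so don't use it as-is:
;
; file-server: make http-server
; [
;     get-handler: func [ hdrs /local target ]
;     [
;         target: join %./ next select hdrs "Target-URI:"
;         make handler-response
;         [
;             code: either exists? target ["200"]["404"]
;             mime: "text/html" cookie: ""
;             entity: either exists? target [read target][copy "Not Found"]
;             hdrs: copy []
;         ]
;     ]
; ]
;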
;
http-server: make object!
[
;
; HTML Templates
;
default-template: copy
{<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<head>
<title>Rebol HTTP Server Object</title>
<style type="text/css">
<!--
a:link ^{ text-decoration: none; color: #6D4D4D ^}
a:visited ^{ text-decoration: none; color: #4D4D6D ^}
.text ^{ font-family: Arial, Sans-serif; font-size: 12px; ^}
-->
</style>
<meta name="generator" content="Reblog 3.0">
</head>
<body bgcolor = "#FFFFFF">
<div class = "text">
<!--content-->
</div>
</body>
</html>}
;
; Attributes
;
port: 80
connections: 0
listener: none
;Queues to hold the state of multiple connections
read-q: copy []
write-q: copy []
;Create a 4k buffer for headers
header-buffer: make string! 4096
;Send/Receive data in 8k chunks
read-chunk-size: 8192
write-chunk-size: 8192
;Limit size of PUT or POSTed data
max-entity: 32768
debug: true
;Copy this object and set the appropriate fields in the
;get-handler, put-handler, post-handler, and head-handler methods.
handler-response: make object!
[
code: copy "200"
mime: copy "text/html"
cookie: copy "value=nothing"
entity: copy ""
hdrs: copy []
]
;
; Utility Methods
;
cmt-insert: function
[str [string!] tag [string!] con [string!] /dup /every]
[ s t c d]
[
s: rejoin [ copy "<!--" copy tag "-->" ]
c: copy con
if dup [ c: append c reform [ "^/" s] ]
either every
[ t: replace/all copy str s c]
[ t: replace copy str s c]
return t
]
;
; Initialization Methods
;
init: function []
[ ]
[
;Open the http port
listener: open/lines join tcp://: port
;I think I had grander initialization plans than this.
]
;
; Server Response utilities
;
;
; Handler Methods
;
;
; Handlers should always return the HTTP Response code
; The mime type of the entity being sent
; A block of auxiliary headers in the form of [ "Header: " "Value" ]
; The actual entity being sent. It can be a binary!, string!, or an open
; port!
;
; As a convenience, there is a handler-response object which encapsulates
; those four things. Its fields are:
; code [string!] mime [string!] cookie [string!]
; entity [binary! port!] hdrs [block!]
;
; Note: Linefeeds for the auxiliary headers will automatically be added
; in the send-response method.
;
; Override these four methods in a derived object to make this server
; do what you want. e.g.
; my-server: make http-server
; [ get-handler: :my-get-handler post-handler: :my-post-handler port: 8000 ]
;
; The read and write queues are set up such that the handlers never have
; to worry about streaming data to or from the client.
;
; e.g. During a GET, the get-handler merely has to do something useful
; with the headers passed to it and generate an appropriate entity
; the connection is then placed in the write-q and the entity is sent
; to client in write-chunk-size sized bits.
;
; During a PUT or POST, the connection and data being sent from
; the client is placed in the read-q and retrieved in read-chunk-size
; sized chunks. The PUT or POST handler is then called, and it
; should take that data and the headers passed to it and generate an
; appropriate entity. Like the GET handler, the connection and the new
; entity are then placed in the write-q where they are streamed back
; to the client in write-chunk-size sized chunks.
;
; I have not figured out what should be done during a HEAD request.
; Probably nothing.
;
get-handler: function [ hdrs ]
[ t response s h v headerstring ]
[
;
; By default format and return the headers passed to the server
; from the browser
;
headerstring: copy "Request Headers: <br>^/"
foreach [ h v ] hdrs
[ append headerstring rejoin ["<li>" h " " v "<br>^/"]]
t: copy default-template
t: cmt-insert t "content" rejoin
[
"Default HTTP Get Handler<br><br>^/"
form now "<br><br>^/"
headerstring "<br><br>^/"
]
response: make handler-response
[
code: "200" mime: "text/html"
cookie: "" entity: t
hdrs: copy [ "Rebol-HTTP-Object-Server:" "1.0a" ]
]
return response
]
head-handler: function [ hdrs ]
[ response ]
[
response: make handler-response
[
code: "200" mime: "text/html"
cookie: "" entity: ""
hdrs: copy [ "Rebol-HTTP-Object-Server:" "1.0a" ]
]
return response
]
put-handler: function [ hdrs data ]
[ t response h v headerstring ]
[
;
; By default format and return the headers and data passed to the
; server from the browser
;
headerstring: copy "Request Headers: <br>^/"
foreach [ h v ] hdrs
[ append headerstring rejoin ["<li>" h " " v "<br>^/"]]
t: copy default-template
t: cmt-insert t "content" rejoin
[
"Default HTTP Put Handler<br><br>^/"
form now "<br><br>^/"
headerstring "<br><br>^/"
"Data: <br>^/" to-string data
]
response: make handler-response
[
code: "200" mime: "text/html"
cookie: "" entity: t
hdrs: copy [ "Rebol-HTTP-Object-Server:" "1.0a" ]
]
return response
]
post-handler: function [ hdrs data ]
[ t response h v headerstring ]
[
;
; By default format and return the headers and data passed to the
; server from the browser
;
headerstring: copy "Request Headers: <br>^/"
foreach [ h v ] hdrs
[ append headerstring rejoin ["<li>" h " " v "<br>^/"]]
t: copy default-template
t: cmt-insert t "content" rejoin
[
"Default HTTP POST Handler<br><br>^/"
form now "<br><br>^/"
headerstring "<br><br>^/"
"Data: <br>^/" to-string data
]
response: make handler-response
[
code: "200" mime: "text/html"
cookie: "" entity: t
hdrs: copy ["Rebol-HTTP-Object-Server:" "1.0a" ]
]
return response
]
;
; Request and Response servicing methods
;
fetch-headers: function [ conn ]
[ line hdrs ip-add hdrsblk rawhdrs h p ]
[
;Suck each line of headers from the connection
;And place it in a buffer
while [all [((line: pick conn 1) <> "") (line <> none)]]
[ append header-buffer join line "^/" ]
;Break the header-buffer up into something Selectable
;There are probably a bunch of much more elegant ways to do this...
hdrs: copy []
rawhdrs: parse header-buffer none
append hdrs reduce
[
"Method:" first rawhdrs
"Target-URI:" second rawhdrs
"HTTP-version:" third rawhdrs
]
hdrsblk: parse/all header-buffer "^/"
foreach h hdrsblk
[
p: find h ": "
if not none? p
[
append hdrs join first parse/all h ":" ":"
append hdrs copy next next p
]
]
;Tack on the IP Address of the connecting machine
;Probably a useful thing to keep around.
;Note: the IP address is reversed in Rebol 2.2 for MacOS
ip-add: conn/host
append hdrs reduce ["IP-Address:" form ip-add ]
;Clear the buffer for the next connection
clear head header-buffer
return hdrs
]
send-response: function [ conn response ]
[ header aux-headers h v ]
[
;Assemble a HTTP Response header
;Tucking in any auxiliary headers
aux-headers: copy ""
foreach [ h v ] response/hdrs
[ append aux-headers rejoin [h " " v "^/" ] ]
;But Do not send a set-cookie header if the cookie is empty
if response/cookie <> ""
[ response/cookie: rejoin [ "Set-cookie: " response/cookie "^/" ] ]
header: rejoin
[
"HTTP/1.0 " response/code "^/"
"Content-type: " response/mime "^/"
response/cookie
aux-headers
"Content-length: " length? response/entity "^/^/"
]
;Write it out to the connection
write-io conn header length? header
]
process-write-q: function []
[ new-q qdata conn headers entity chunk ]
[
new-q: copy []
chunk: make binary! write-chunk-size
foreach qdata write-q
[
set [ conn headers entity ] qdata
chunk: copy/part entity write-chunk-size
entity: skip entity write-chunk-size
write-io conn chunk length? chunk
either tail? entity
[
;If we have reached the end of the entity close the
;connection.
close conn
;if the entity is a file reference, close the file
if error? try [close entity][]
;
; Just for debugging.
;
if all [((select headers "Target-URI:") = "/shutdown") (debug)]
[ shutdown ]
]
[
;If we have not reached the end of entity
;put the connection, headers, and entity in the new
;queue
append/only new-q reduce [ conn headers entity ]
]
]
write-q: new-q
]
dispatch-connection: function [ q-data ]
[ conn entity headers response ]
[
set [ conn headers entity ] q-data
;Just a safeguard in case one of the handlers screws the pooch.
if not error? try
[
;Dispatch according to the HTTP method
switch select headers "Method:"
[
"GET" [ response: get-handler headers ]
"HEAD" [ response: head-handler headers ]
"PUT" [ response: put-handler headers entity ]
"POST" [ response: post-handler headers entity ]
]
]
[
;Send the response header
send-response conn response
;Put the entity body of the response in the write queue
;To be sent out in bite-size chunks.
append/only write-q reduce [ conn headers response/entity ]
]
]
process-read-q: function []
[
new-q qdata conn headers entity chunk
bytes-received bytes-to-get content-length
]
[
new-q: copy []
chunk: make binary! read-chunk-size
foreach qdata read-q
[
set [ conn headers entity ] qdata
content-length: to-integer select headers "Content-length:"
;Clamp the size of the received data
if content-length > max-entity [ content-length: max-entity ]
;Calculate how much data to suck from the port
bytes-received: length? entity
bytes-to-get: minimum
(content-length - bytes-received) (read-chunk-size)
;Suck the data from the port
read-io conn chunk bytes-to-get
;Append it to the existing data
append entity chunk
;See if we need to suck more data, if so, put everything
;back in the read-q. If not, dispatch that puppy.
either bytes-to-get >= read-chunk-size
[ append/only new-q reduce [ conn headers entity ] ]
[ dispatch-connection reduce [ conn headers entity ] ]
]
read-q: new-q
]
handle-new-connection: function []
[ headers conn qdata entity entity-size ]
[
conn: first listener
connections: connections + 1
headers: fetch-headers conn
entity-size: select headers "Content-length:"
either none? entity-size
[
;If there is no content, it is probably a get or head request
;Therefore, the entity body is empty.
entity: copy ""
;Let the dispatcher figure out what the method is.
dispatch-connection reduce [ conn headers entity ]
]
[
;Clamp the size of the entity buffer
entity-size: minimum (to-integer entity-size) max-entity
entity: make binary! entity-size
;Put the connection port, headers, and entity buffer in the read-q
qdata: reduce [ conn headers entity ]
append/only read-q qdata
]
]
;
; Main Loop
;
run: function [] []
[
init
forever
[
;If the queues are empty just block and wait on the listener port
either all [(zero? length? write-q) (zero? length? read-q)]
[
wait listener
handle-new-connection
]
;Otherwise handle any new connections
;And then process the queues.
[
either none? (wait reduce [ listener 0.001 ])
[ process-read-q process-write-q ]
[ handle-new-connection ]
]
]
]
;
; Cleanup Routine. Close up any open ports or files
;
shutdown: function [] [qdata]
[
if error? try [close listener] [ ]
foreach qdata read-q
[ if error? try [ close first qdata close third qdata ] [] ]
foreach qdata write-q
[ if error? try [ close first qdata close third qdata ] [] ]
halt
]
]
; Example
; s: make http-server []
; s/run
;End of code
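Once it's running, a quick sanity check from a second REBOL console
(assuming the server is on the default port 80 of the same machine) is
just:

    print read http://localhost/

which fetches and prints the page generated by the default get-handler.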