World: r3wp

[Web] Everything web development related

Pekr
10-Feb-2006
[1060x3]
Are continuations the basis for tasking/threading?
Do whatever you want with Orca; it's just that when I mentioned 
continuations a few years ago on the ML, Carl got on steroids and posted 
something like: been there, done that, don't try to teach me how 
I should design the language :-)
If they are not really that useful, I am not sure I am the one 
calling for "language purity" just for the sake of language purity 
...
Geomol
10-Feb-2006
[1063]
Hm, to me continuations are reminiscent of GOTOs, which can't be good!
Joe
10-Feb-2006
[1064x2]
I am not asking for native continuations, but for a way to emulate them 
in web applications.
Geomol, the real advantage of continuations is for handling web forms 
and ensuring that users get a consistent experience. Check the paper 
Jaime points out.
JaimeVargas
10-Feb-2006
[1066]
Joe, I guess you could emulate continuations. But it is not an easy 
task. I have done some work in this direction by creating a monadic 
extension. But it is not yet complete.
Joe
10-Feb-2006
[1067x7]
The problem I'm trying to solve is strictly for web programming, e.g. 
ensuring there are no inconsistencies in a shopping cart, etc ...
The approach I have is that every session has a cookie and disk 
storage associated with the cookie. When I define a web form, the action 
method gets a continuation id as a CGI parameter, so if at that point 
you clone the browser window, you as a user have two continuation 
ids.
This approach is not very scalable; it's just a start while I wait for 
better ideas and input.
When the user posts a form, the form CGI stores the continuation 
id and a REBOL block with name-value pairs.
If you post the second form as well (something you would do e.g. when 
checking flights in a reservation engine, as Jaime's reference paper 
suggests), a second continuation id and REBOL block would be stored 
for the same session.
So basically the continuations are ensured by using both the cookie 
with its associated storage and the continuation id that is added to the 
links as a CGI GET parameter.
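In REBOL 2 CGI terms, the idea is roughly this -- a sketch only: the 
%state/ directory, the cid parameter name, and the form-values block 
are illustrative, not actual code from the site:

    REBOL [Title: "Continuation-id sketch"]

    ; Session identity comes from the cookie; the continuation id
    ; arrives as a GET parameter on the form's action URL.
    session: any [get-env "HTTP_COOKIE" "no-session"]
    args:    decode-cgi any [get-env "QUERY_STRING" ""]
    cont-id: any [select args 'cid "0"]

    ; One state file per (session, continuation id) pair.
    state-file: rejoin [%state/ checksum session "-" cont-id %.r]

    ; Restore whatever was saved under this continuation id ...
    state: either exists? state-file [load state-file] [copy []]

    ; ... then merge in the newly posted name-value pairs and save
    ; it back. (form-values stands in for the block decoded from
    ; the POST body.)
    save state-file append state form-values

A cloned browser window carries a different cid, so it names a 
different state file and cannot clobber the first one.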
I'll stop now so that I get more input from others. I imagine many 
of the gurus here have done something like this as this is the thorny 
issue with web apps
Sunanda
10-Feb-2006
[1075]
What you are doing, Joe, is what we old-timers call pseudoconversational 
processing.

Usually, you can kick much of the complexity upstairs if you have 
a TP monitor supervising the show. Sadly, most web apps don't (a 
webserver doesn't quite count).

People have been doing this sort of thing for decades in languages 
without continuations support; so, though it's a nice-to-have feature, 
it is not a show-stopper.
[unknown: 9]
10-Feb-2006
[1076]
Joe, you are asking a question that finds its answer in a completely 
different model. It reminds me of the joke, "What I meant to say was, 
'Mother, would you please pass the salt'" (look it up).


The answer is to throw away the brochure (page) model of the web, 
and move to web 2.0, where there is a cohesive (continuous) model.


The UI is completely separated from the backend, and the UI is a single 
entity that is persistent during the session. Everything else is 
simply a pain.


Most sites are horizontal (shallow) as opposed to vertical (deep). 
And most are still modeled on the brochure (page) as opposed to 
the space (like a desktop).
Oldes
13-Feb-2006
[1077]
I'm administering some pages where a lot of text articles are published. 
And because 50% of the traffic comes from robots such as the Google 
crawler, I'm thinking that I could serve the content of the page in 
REBOL format (a block). The robot would still get the text for indexing, 
and I would lower the amount of data transferred with each robot 
request, because I wouldn't need to generate the design and the web 
parts which are not important to the robot. What do you think, should 
I include a REBOL header?
Sunanda
13-Feb-2006
[1078]
That's a form of cloaking. Google does not like cloaking, even "white 
hat" cloaking of the sort you are suggesting:
http://www.google.com/support/bin/answer.py?answer=745


Better to respond to Google's if-modified-since header -- it may 
reduce total bandwidth by a great deal:
http://www.google.com/webmasters/guidelines.html


Also consider supplying a Google Sitemap -- and that can have modification 
dates embedded in it too. It may reduce googlebot's visits to older 
pages:
http://www.google.com/webmasters/sitemaps/login
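A rough REBOL 2 CGI sketch of the if-modified-since idea -- assuming 
a parse-http-date helper, since REBOL's to-date doesn't read the 
RFC 1123 date format the crawler sends:

    REBOL [Title: "If-Modified-Since sketch"]

    page: %articles/example.html        ; hypothetical article file
    ims:  get-env "HTTP_IF_MODIFIED_SINCE"

    ; parse-http-date is an assumed helper that turns e.g.
    ; "Mon, 13 Feb 2006 10:00:00 GMT" into a REBOL date!
    either all [ims (modified? page) <= parse-http-date ims] [
        ; Crawler's copy is still current: headers only, no body.
        print "Status: 304 Not Modified^/"
    ][
        print "Content-Type: text/html^/"
        print read page
    ]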
Oldes
13-Feb-2006
[1079]
But it's not just Google that's crawling; at the moment I count 
11 crawlers that check my sites regularly.
Sunanda
13-Feb-2006
[1080]
Some of them are just bad -- ban them with a robots.txt


Some (like MSNbot) will respond to the (non-standard) crawl-delay 
in robots.txt: that at least keeps them coming at a reasonable speed.


Some are just evil, and you need to ban their IP address by other 
means, like flood control or .htaccess.

REBOL.org has a fairly useful robots.txt:
http://www.rebol.org/robots.txt
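For example (bot names made up), a robots.txt like this bans one 
crawler outright and asks the rest to pace themselves:

    # Ban a misbehaving crawler completely
    User-agent: BadBot
    Disallow: /

    # Crawl-delay is non-standard, but honoured by MSNbot and a few others
    User-agent: *
    Crawl-delay: 10
    Disallow: /cgi-bin/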
Oldes
13-Feb-2006
[1081]
So you think I should not serve a different (less rich) version of 
the page to robots.
Sunanda
13-Feb-2006
[1082]
You could try this as a first step:

-- Create a robots.txt to ban the *unwelcome* bots who visit you 
regularly.

-- Many bots have a URL for help, and that'll tell you if they honour 
crawl-delay. If so, you can get some of the bots you like to pace 
their visits better.

If that doesn't work, you have to play tough with them.
Oldes
13-Feb-2006
[1083]
I don't need to ban them :) I would prefer to play with them :) Never 
mind, I will probably make the REBOL-formatted output anyway. If I 
have RSS output, why not have REBOL output as well? Maybe it could 
be used in the future, when REBOL will be able to display rich text.
Sunanda
13-Feb-2006
[1084]
Check if you can turn HTTP compression on with your webserver. It 
saves bandwidth with visitors who are served the compressed version.
Oldes
13-Feb-2006
[1085]
Bandwidth is not such a problem now :) I was just wondering whether 
it could be used somehow to make REBOL more visible.
Sunanda
13-Feb-2006
[1086]
Having REBOL-formatted output is, or can be, a good idea: REBOL.org 
will supply its RSS that way if you ask it nicely:

http://www.rebol.org/cgi-bin/cgiwrap/rebol/rss-get-feed.r?format=rebol

But *automatically* supplying a bot with a different version than the 
one you would show to a human is called cloaking, and the search engines 
don't like it at all.

If they spot what you are doing, they may ban you from their indexes 
completely.
Oldes
13-Feb-2006
[1087]
Do you specify the content-type when you produce the output? It doesn't 
look good if you open it in a browser; it should look better than XML 
for newbies.
Sunanda
13-Feb-2006
[1090]
Yes.

If you clicked the link I gave above, then you saw a page served 
as text/html [it probably should be text/plain -- so I've changed it].
If you try format=rss then you get a page served as text/xml.


In both cases, the output is not meant for humans: one format is 
for REBOL and one for RSS readers.
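In a REBOL CGI, the content type is just the first header line printed 
before the body; a blank line ends the headers. A minimal sketch, with 
feed-block standing in for the generated data:

    print "Content-Type: text/plain^/"   ; ^/ adds the blank line
    print mold feed-block                ; the REBOL-formatted payload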
Oldes
13-Feb-2006
[1091]
Yes, now it's OK :)
Sunanda
13-Feb-2006
[1092]
Good to know, thanks.
Sometimes changes like that break things in other browsers.
Oldes
13-Feb-2006
[1093]
I know it's not for human readers, but this chat is public, for example, 
and if someone clicks on the link, it now looks much better. 
Don't forget that REBOL should be human friendly :)
Sunanda
13-Feb-2006
[1095]
As I said, the RSS feed is explicitly intended to feed data to other 
programs for formatting, so it doesn't (perhaps can't) look nice.

All the info is available in human friendly ways elsewhere on the 
site, eg:

script library changes: http://www.rebol.org/cgi-bin/cgiwrap/rebol/script-index.r
Oldes
13-Feb-2006
[1096]
Yes, no problem. And on the issue of the bots: if a bot doesn't support 
cookies (none of them does), I can give it whatever I want. I'm not 
cheating; I may just assume that it's something like Lynx and serve 
it pure text pages :) And if it doesn't like it, that's its problem. 
As for the robots.txt file: the ugly bots won't respect robots.txt 
anyway :)
Sunanda
13-Feb-2006
[1097]
Some bots (the more evil ones) have an even more evil human at their 
side. Those bots can handle cookies, and will also use the human 
to step them through any logon procedures.
So, technically, yes: bots can use cookies.

But on the cloaking issue: if you show the *same* content to every 
visitor that does not use cookies, then that is *not* cloaking, even 
if you serve different content to those that do. So no problem there.
Anton
14-Feb-2006
[1098x2]
How does one find out where the latest "official" syntax for URIs 
is? For example, I'm looking at:
http://www.w3.org/Addressing/rfc1808.txt
This seems more up to date:
http://www.gbiv.com/protocols/uri/rfc/rfc3986.html
JaimeVargas
14-Feb-2006
[1100]
Kudos to Yahoo!, who today released two pieces of goodness into the 
commons. The first is their UI library, and the second is their Design 
Patterns Library. The UI Library is a collection of DHTML/Ajax/Javascript 
(pick your favourite term) controls and widgets. The Design Patterns 
Library is "intended to provide Web designers prescriptive guidance 
to help solve common design problems on the Web". 

- http://developer.yahoo.net/yui/
- http://developer.yahoo.net/ypatterns/
Anton
15-Feb-2006
[1101]
read/custom - can it send more than one cookie at a time?
Oldes
15-Feb-2006
[1102x3]
Of course it can.
I use this script to handle cookies:
do http://box.lebeda.ws/~hmm/rebol/cookies-daemon_latest.r
Just run it, and then when I read pages, cookies are processed 
automatically.
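For the record, recent REBOL 2 builds also let you supply the header 
yourself via read/custom, and multiple cookies travel in a single 
Cookie header separated by "; " -- a sketch, with the URL and cookie 
values made up:

    read/custom http://www.example.com/page [
        header [Cookie: "session=abc123; lang=en"]  ; two cookies, one header
    ]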
Anton
15-Feb-2006
[1105]
OK, thank you, I will try that.
Anton
23-Feb-2006
[1106]
Thank you, Oldes, it seems to be working.
Pekr
4-Apr-2006
[1108]
Hi ... I have the following task to accomplish ... My friend, who is 
studying archaeology at college, is working on her thesis. Part of the 
thesis is images of various ancient goods, so we've got photos from 
our digital camera. I will write a small View script which will 
allow her to enter comments for each image. Now I want to make a template 
(table?) with various layouts, mainly two images per A4 page plus 
comments under each of those images. Can I influence the table cell size? 
My past experience was that the cell got resized according to the image. 
Are there also various methods to "stretch" the image in a cell? 
Thanks a lot for pointers ...
Sunanda
4-Apr-2006
[1109]
You mean HTML tables?

The cell has a height and width, and the image has a height and width.

You probably need to set both height and width on both the cell and 
the image. It's probably easiest with CSS.
Remember to set the padding and margin to zero.

And remember that IE6 and lower handle this differently from other 
browsers, so it's not easy to get pixel-perfect borders and so on.
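A minimal HTML/CSS sketch of one image-plus-comment cell (the 
dimensions and file names are placeholders; repeat the pair of rows 
for the second photo on the page):

    <table style="border-collapse: collapse;">
      <tr>
        <!-- fix the cell size and zero the padding -->
        <td style="width: 400px; height: 300px; padding: 0;">
          <!-- explicit width/height stretch the image to fill the cell -->
          <img src="photo1.jpg" alt="find 1"
               style="width: 400px; height: 300px;">
        </td>
      </tr>
      <tr>
        <td>Comment under the image ...</td>
      </tr>
    </table>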