world-name: r3wp
Group: !REBOL3 ... [web-public] | ||
Ladislav: 20-Apr-2011 | aha, fine, that is perfectly possible, but it may be caused e.g. by the fact that you actually are forced to use ANY and ALL for processing conditional values, since AND and OR do not work satisfactorily for that purpose | |
Oldes: 21-Apr-2011 | I prefer ANY and ALL for this reason:
>> if (print 1 true) AND (print 2 false) AND (print 3 true) [print 4]
1
2
3
== none
>> if all [(print 1 true) (print 2 false) (print 3 true)][print 4]
1
2
== none | |
Geomol: 21-Apr-2011 | It confused me a bit what exactly was meant by "conditional AND and OR". So it's not just that they operate on logic values, but that the second operand is only evaluated if needed. | |
Geomol: 21-Apr-2011 | I presume it'll be hard to implement conditional AND and OR in REBOL, because it's not just a type check as with other operators. | |
Ladislav: 21-Apr-2011 | it's not just, that they operate on logic values, but that the second operand is only evaluated, if needed - actually, you missed the explanation. "Conditional" was coined by Brian and was meant to represent functions able to yield values usable as CONDITION arguments of IF etc. as well as being able to combine conditional expressions into more complex conditional expressions (which is not true for AND and OR as demonstrated) | |
Geomol: 21-Apr-2011 | Aha, then we're back to my original understanding, and then I don't understand why you say REBOL does not have conditional AND and OR. REBOL does to some extent:
>> true and true
== true
Isn't that a conditional AND from your (or Brian's) definition? | |
onetom: 21-Apr-2011 | true and 1 none or none | |
onetom: 21-Apr-2011 | each operand could be a parameter for IF/EITHER/UNLESS and has its very well-defined logic! value | |
Geomol: 21-Apr-2011 | Yes, I understand, but wouldn't it possibly confuse Carl to say "REBOL does not have conditional AND and OR"? It does to some extent, right? | |
Geomol: 21-Apr-2011 | I guess there are several problems in this. Some conditions are dealt with by AND and OR, the logic! ones. But not all conditions IF/EITHER/UNLESS can handle are supported by AND and OR. And second, AND evaluates all operands, even if some are false. That is the definition of "conditional AND and OR" I found from a search: http://download.oracle.com/javase/tutorial/java/nutsandbolts/op2.html See "The Conditional Operators" a bit down and the mention of "short-circuiting" behavior. | |
Geomol: 21-Apr-2011 | Not having the "short-circuiting" behaviour is a direct bug, as I see it. Because this code will create an error:
if (port <> none) and (data: read port) [ ... ]
But I have a gut feeling that logic AND isn't very REBOLish, and that's why we more often use ALL and ANY. It's just a feeling, so feel free to disagree. | |
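As a minimal illustration of the short-circuiting idiom referred to above (PORT and DATA are hypothetical variables), ALL stops at the first false/none value, so the READ is never attempted when the port is missing:
    port: none
    if all [port data: read port] [print data]  ; READ is skipped, no error, IF gets none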
onetom: 21-Apr-2011 | the original problem was not AND/OR but NOT | |
onetom: 21-Apr-2011 | or actually it is but then AND/OR why not | |
Maxim: 21-Apr-2011 | the basic problem is that NOT isn't symmetric with AND/OR/XOR. currently, AND/OR/XOR are bitwise ops. they are not language control flow ops. | |
Maxim: 21-Apr-2011 | it raises the question, well what are these ops... and the answer is that there are none. it's a strange hole in the language spec which has passed under the radar for a really long time. | |
Maxim: 21-Apr-2011 | it's very possible that REBOLers think in a different way and using ANY/ALL is more natural, hence the inattention this has had. | |
Geomol: 21-Apr-2011 | I don't agree that they're only used bitwise, as this example illustrates:
>> a: 1 b: 2
== 2
>> if (a = 1) and (b = 2) [print "it's true"]
it's true | |
Ladislav: 21-Apr-2011 | Geomol: "I don't understand, why you say REBOL does not have conditional AND and OR" - excerpt from the above definition of "conditional operator": "...being able to combine conditional expressions into more complex conditional expressions..." The demonstration that AND and OR are not able to combine conditional expressions into more complex conditional expressions is easy | |
Ladislav: 21-Apr-2011 | (and has been done already) | |
Geomol: 21-Apr-2011 | Is it correct to say that AND and OR can be used as bitwise operators and to check on logic! values, and that e.g. IF can do more than this and so isn't really compatible with AND and OR? | |
Maxim: 21-Apr-2011 | you just got the differentiation between conditional and logical comparisons. | |
Ladislav: 21-Apr-2011 | I am trying to use a slightly different formulation: IF can check not just logical expressions (yielding LOGIC! values), but conditional expressions (yielding any values). We do not have operators combining conditional expressions into more complex conditional expressions (ANY and ALL are dialects, not operators, although they can be used successfully). | |
onetom: 21-Apr-2011 | Ladislav: very clear and concise description. i would emphasize the operator by saying "..don't have any op!s combining.." | |
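For illustration, a minimal console sketch of the point above: ANY and ALL pass arbitrary "truthy" values through and short-circuit, while AND is restricted by its operand types (exact error text may differ between versions):
    >> any [none "found"]
    == "found"
    >> all [1 < 2 "found"]
    == "found"
    >> (1 < 2) and "found"
    ** Script error (AND does not accept a string! operand)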
Geomol: 21-Apr-2011 | I like analog more and more. Buying analog synths, trying to build analog circuits making sound. | |
Geomol: 21-Apr-2011 | onetom, I guess you can see all functions taking a block as an argument to be dealing with dialects. The content of the block is just words until the function starts to interpret it and gives meaning to it. | |
Geomol: 21-Apr-2011 | Or maybe more correctly, the content of a block is datatypes (words, numbers, etc), and then the function starts to make sense of it. | |
onetom: 21-Apr-2011 | yeah, but in this case the default rebol evaluator is the 1st and almost only thing which touches that block, that's why i overlooked this | |
Geomol: 21-Apr-2011 | The fact that ANY and ALL are natives is maybe just to make them faster. It should be possible to create them as functions. One project of mine is to figure out what minimum set of natives is needed for a REBOL-like language; the rest can then be implemented as functions. | |
BrianH: 21-Apr-2011 | It is definitely possible to implement ANY and ALL as functions if you use DO/next (either the R3 or R2 version). | |
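A minimal sketch of that idea, assuming R2's DO/next (which returns a two-element block: the value of the first expression and the remaining position); MY-ANY is a hypothetical name, and UNSET! results and error handling are left out:
    my-any: func [block [block!] /local result] [
        while [not tail? block] [
            set [result block] do/next block
            if :result [return :result]
        ]
        none
    ]
    >> my-any [1 > 2  3 > 2  print "never reached"]
    == true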
BrianH: 21-Apr-2011 | You use the OP function to make op! values, and 'op gets unset after the mezzanines finish loading because of security/stability (I think OP doesn't have a lot of safety checks yet). TO-OP isn't defined. You can't currently make an op like Geomol's FROM because you can only currently make ops redirect to native! or action! functions - you can't make the necessary wrapper function. This is all afaik, based on conversations with Carl; I haven't tested this very much yet. | |
Maxim: 21-Apr-2011 | fire up R3 and type:
>> help to-op
USAGE:
    TO-OP value
DESCRIPTION:
    Converts to op! value.
    TO-OP is a function value.
ARGUMENTS:
    value | |
BrianH: 21-Apr-2011 | There are some real limits to op! evaluation that make them unsafe for general use for now. For one thing, the type of the first argument is not checked at call time relative to the typespec by the evaluator - it has to be checked by the function itself internally. This is not a problem with most existing ops because they map to actions (which have their type compatibility handled by the action dispatcher), or in some cases natives that perform their own internal type checking. I believe this was a speed tradeoff. If you allowed full user creation of ops even with their current restriction to actions and natives, you could bypass the typespecs of natives and corrupt R3's memory. Adding in that typespec check to the op! evaluator would slow down op! evaluation a little, but would be necessary if you really wanted to make user-defined ops possible. | |
BrianH: 21-Apr-2011 | That would fail for = and all other functions that allow the any-type! value (for unset! and error! support), and make it difficult to understand the help of all op functions, which use the typespec of the first argument for documentation purposes. Plus, the argument list for the op! is taken directly from the function it is derived from - it can't and shouldn't be able to be specified separately. | |
Maxim: 21-Apr-2011 | brian, we are talking about an api issue. what happens before application starts is irrelevant. on init, let R3 do whatever it wants. once we start running the script, have an api-minded function which just makes sure that any function you send to the core used as an op is safe. I don't care for any limits... just document them and I'll live with them. whatever the core has which I can't have... who cares. as far as error reporting goes, that is the reason for the stub. It will have to either handle the error appropriately or just raise an error. really, there is no technical reason for this not being done. it's just a question of doing it. limited user ops are still infinitely better than none. | |
BrianH: 21-Apr-2011 | The other trick that would need to be accounted for is that the actual process of calling functions is different for every function type, and afaict the differences are implemented in the evaluator itself rather than in some hidden action! of the function's datatype implementation. This is why I was glad to figure out that the command! type was sufficient to implement what we needed user-defined function types for, because it appears that user-defined function types are impossible in R3, even potentially. The op! redirector needs to be able to understand how to call the function types it supports. R2 ops only understood how to call actions (technically, DO did the redirection in R2, not the op! code itself). R3 ops can also redirect to natives, which is why some functions are native! now that were action! in R2. In order to support making ops from user-defined functions, the op! redirector code would need to be expanded to support calling those function types. | |
BrianH: 21-Apr-2011 | I am not disagreeing with the need for and value of user-defined ops, Maxim, just saying what needs to be done to make them possible. | |
BrianH: 21-Apr-2011 | It would be theoretically possible to make unary postfix ops in REBOL, as long as you used a different datatype or some flag in the op! value which could be set at op! creation time - the evaluation model of REBOL could allow such a thing. Ternary ops would be trickier: you would have to have the second word be a get-word parameter of the word! type, serving as the underlying function's third parameter, and the third parameter of the op would be the fourth parameter of the function. All ternary ops starting with the same word would need to be implemented by the same function, which would behave accordingly based on which word is passed as its third parameter. The ternary op value itself would be assigned to the first word, because REBOL doesn't have multi-word bindings. | |
BrianH: 21-Apr-2011 | A similar method could be used to implement unary postfix and ternary ops in Red, though the tricks would be in the compiler instead of the evaluator. | |
BrianH: 21-Apr-2011 | Strangely enough, if ops were implemented using the R2 method - DO swaps the op keyword for its prefix equivalent, instead of the op itself redirecting - then unary postfix and ternary ops would be possible right now with the current op! type, no new internal flags needed. Prefix functions can have infix keywords already, as long as they are not optional - the arity of the function needs to stay the same, but there's nothing illegal about infix keyword parameters in REBOL. | |
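For example, a minimal sketch of such an infix keyword: a hypothetical MOVE-ONTO function whose middle parameter is a non-optional lit-word acting as a keyword, so the arity stays fixed:
    move-onto: func [value 'kw [word!] target [block!]] [
        if kw <> 'onto [do make error! "expected keyword ONTO"]
        append target value
    ]
    >> move-onto 5 onto [1 2 3]
    == [1 2 3 5]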
GrahamC: 24-Apr-2011 | I managed to bring up my wiki again .. and the SOAP stuff is here http://www.compkarori.co.nz:8000/Rebol3/AWS but Amazon is going to require https so not sure how that is going to work. | |
Henrik: 27-Apr-2011 | I find myself often needing to sort in a file system on date, when the file name contains a date, but I have to manually build a new date string where the month is a zero-padded number. Does it not make sense to have a file-system- and sort-friendly date stamp? | |
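One way to build such a stamp today, as a minimal sketch (PAD2 and DATE-STAMP are hypothetical helpers; the REJOIN mezzanine of R2/R3 is assumed):
    pad2: func [n [integer!] /local s] [
        s: form n
        if 1 = length? s [insert s "0"]
        s
    ]
    date-stamp: func [d [date!]] [
        rejoin [d/year "-" pad2 d/month "-" pad2 d/day]
    ]
    >> date-stamp 27-Apr-2011
    == "2011-04-27"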
Maxim: 27-Apr-2011 | just remove vin and vout functions before running the func. | |
Geomol: 27-Apr-2011 | Regarding lit-words compared to other datatypes:
>> w: first ['a/b]
== 'a/b
>> type? w
== lit-path! ; this returns path! in R2
>> type? :w
== lit-path!
>> w: first ['a]
== 'a
>> type? w
== word! ; Why?
>> type? :w
== lit-word!
There is this double evaluation of words holding lit-words. Why is that? As far as I can see, only words holding lit-words and functions (incl. natives ...) have this difference in behaviour, when referring to them as words or get-words. I understand why with functions, but why also with lit-words? | |
onetom: 28-Apr-2011 | >> x: [16#ffffff]
== [#6#ffffff]
how can i specify an integer! in hex format? debase/base "ffffff" 16 returns a binary! which i mostly can smear on my hair, since most operators just don't play nicely with it... same problem again... i tried to use rebol for byte level processing and it's just not suitable for it.. :/ | |
Maxim: 28-Apr-2011 | the issue is sort of a syntax-sugar one; the binary string is the actual value in RAM. so you can do things like:
a: #{0f0f0f0f}
b: 3520188881
>> a and b
== #{01010101}
but you can't with issues:
>> b: #d1d1d1d1
== #d1d1d1d1
>> a and b
** Script error: and does not allow issue! for its value2 argument | |
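For the original question about getting an integer! out of hex, one workaround (assuming R3, where a binary! converts to a big-endian integer; exact size limits differ between versions):
    >> to integer! #{00FFFFFF}
    == 16777215
    >> to integer! debase/base "ffffff" 16
    == 16777215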
Ladislav: 30-Apr-2011 | And, how many users prefer:
a: make object! [b: does ["OK"]]
type? do in a 'b ; == function!
versus
a: make object! [b: does ["OK"]]
type? do in a 'b ; == string! | |
Geomol: 30-Apr-2011 | And now the 'a/b:
In R2:
>> a: make object! [b: does ["OK"]]
>> w: first ['a/b]
== 'a/b
>> do w
== "OK"
>> do :w
== 'a/b
In R3:
>> a: make object! [b: does ["OK"]]
>> w: first ['a/b]
== 'a/b
>> do w
== 'a/b
>> do :w
== 'a/b | |
Maxim: 30-Apr-2011 | wrt:
And, how many users prefer:
a: make object! [b: does ["OK"]]
type? do in a 'b ; == function!
versus
a: make object! [b: does ["OK"]]
type? do in a 'b ; == string!
============================
the first should be supported via the 'GET word, so I'd say the latter is better; otherwise, there is no point to 'GET. basically, this was perfect in R2, why did it change in R3? | |
BrianH: 1-May-2011 | Ladislav, that ticket was related because it explained that lit-words were active values and that the behavior was intentional. This can be changed if we decide differently, but it isn't currently a bug, it's intentional. | |
Ladislav: 1-May-2011 | And no wonder they do. If a user calls the DO function, then it is to be expected that functions, etc. get evaluated. | |
BrianH: 1-May-2011 | It's the difference between a: :print and a: 'print. | |
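A small console illustration of that difference (R2/R3):
    >> a: :print  a "Hello"   ; A refers to the PRINT function, so evaluating A calls it
    Hello
    >> a: 'print  a           ; A refers to the word PRINT, so evaluating A just yields the word
    == print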
Ladislav: 3-May-2011 | http://issue.cc/r3/1881 and http://issue.cc/r3/1882 submitted | |
BrianH: 3-May-2011 | In #1881 you are proposing to take what in R3 is currently an active value and render it inactive, which will make it mildly safer to handle - lit-word/lit-path conversion to word/path is a trivial thing. In #1882 you are proposing to make the word! type into an active value, where you would have to treat every word value as carefully as you treat the function it is assigned. Except it's worse, because in R2 it has the effect of doing *blocks* as well, if those blocks are assigned to a word - even DO of an inline word isn't that unsafe. It is really bad. | |
BrianH: 3-May-2011 | I noticed when you did the poll, you used a safe function that you knew the source of. Do the poll again with a function that deletes your hard drive, or even a block of code for some other dialect that will coincidentally do damage when interpreted by the DO dialect (since R2 does this with blocks and parens as well). Or even a function that takes an unknown number of parameters, and put the call in the middle of code that might be affected by evaluation order or get-word hacking. | |
BrianH: 3-May-2011 | Most of you might not remember this, but parens used to be treated as active values in R2. If you had a paren assigned to a word, putting that word inline in a DO dialect block would cause the paren to be executed. I used to use this as a way of having quick thunks (functions that take no parameters) without calling DO explicitly. However, this made it difficult to work with paren values, and was eventually removed for security reasons because it made screening for potentially dangerous values more difficult than a simple ANY-FUNCTION? call. It would be bad to make word! and path! values just as difficult to work with. | |
Ladislav: 3-May-2011 | In #1881 you are proposing to take what in R3 is currently an active value and render it inactive - do I? | |
BrianH: 3-May-2011 | No, I mistook what you were saying, and corrected myself in the "Oh wait" message. | |
BrianH: 3-May-2011 | #1434, #1881 and #1882 now have clarifying comments. | |
BrianH: 3-May-2011 | Btw, this comment in #1882: "and since you've requested that lit-word! and lit-path! be returned to their R2-style inconsistency" may not be an accurate representation of your proposal (here earlier in conversation). You might be proposing that R3 do a better job at being inconsistent than R2 is doing (as demonstrated in #1434). If so, cool. | |
BrianH: 4-May-2011 | Pretending that security doesn't matter is a worse policy. Here is what would resolve the security issue:
- Putting warnings in the docs for DO, in the same section where they talk about the special treatment of functions and blocks.
- Make parameters not work, and don't do blocks and parens through word values, same as R2's DO of path values.
- Make sure that we don't try to make set-words and set-paths do assignment when you DO them. Treat them like get-words and get-paths.
Together, those restrictions would make DO of word and path values no more insecure than DO of block and paren values. For functions, we have APPLY. | |
Maxim: 4-May-2011 | btw, I've been using apply in R2.7.8 and it works really well :-) | |
BrianH: 4-May-2011 | DO of block and paren values is something that we can say is secure enough already, assuming that variables and such are protected and secured, so that is a good set of restrictions to follow for words and paths. Calling functions through inline words is secure enough if you can control the binding and writeablility of those words. DO of function values has the argument problem, but it's known and has built-in workarounds (APPLY, putting function calls in parens), and we already have simple ways to screen for them. | |
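A minimal sketch of the APPLY workaround mentioned above (R3 signature: a function plus a block of arguments; the call site never evaluates arbitrary argument expressions from the surrounding code):
    f: func [a b] [a + b]
    >> apply :f [1 2]
    == 3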
Gregg: 4-May-2011 | DO is seductive, because sometimes I want to create (easily) a "dialect environment" and just use DO to evaluate my dialect, safely and securely. Is there a security page in the docs (I don't see one in the R3 docs right now)? If not, that would be good to have. If we have a list of functions and operations you shouldn't use on untrusted data, and what the risks are, that's a good start. | |
Gregg: 4-May-2011 | And, as Brian mentions, having workarounds or being able to screen for exploitable features. | |
BrianH: 4-May-2011 | There isn't much of a security page right now, though it would be a good idea to make one if only to document the stuff that doesn't currently work (like SECURE in the last 4 versions). I don't know if anyone else has made a concerted effort to attack REBOL and then fix the security problems found. | |
BrianH: 4-May-2011 | I would love it if we as a community were to really think through the (UN)PROTECT model, because the current model is incomplete (even for the stuff that works) and the proposed model is starting to look a bit awkward to use. Keep in mind that PROTECT may also be used to make series sharable among tasks, but that this isn't implemented and there is likely a better way to do this. I would love it if there was a good security model that can integrate well with REBOL semantics. | |
BrianH: 4-May-2011 | (I am trying to write a long *starting* message here and have to put it in the clipboard to answer these questions, sorry.) | |
BrianH: 4-May-2011 | Some factors to consider about the REBOL semantic limitations:
- There is no such thing as trusted-vs-untrusted code in a REBOL process, nor can there be, really. Levels of trust need to be on a process boundary. You can't (even hypothetically) do LOAD/secure level or DO/secure level, but you can do LAUNCH/secure level.
- If you want to make something readable or writeable to only certain code within a process, binding visibility tricks are the only way to do it. The only way to ensure that your code has access to something and other code doesn't is to make sure that other code can't even see yours. This is why BODY-OF function returns an unbound copy of the body in R3, not the original.
- We need a way to make protection stick so you can't unprotect things that are protected, or protect things that need to stay unprotected, but still allow changes to the protection status of other stuff. The currently proposed model does this through a chain of PROTECT and UNPROTECT calls, then a PROTECT/lock, not allowing unlocking if there is a SECURE 'protect. However, the proposed model seems too difficult to use, and as the pre-110 module system demonstrated, people won't use something that is too complex to use, or will use it badly. We need a better way of specifying this stuff. | |
Kaj: 4-May-2011 | Trying to hammer every hole shut with SECURE and PROTECT is the classic method of sticking all your fingers in the dike. When you run out of fingers for all the holes, the water comes gushing in. Capabilities are about making it impossible to get through the next dike. It's a different way of compartmentalising | |
BrianH: 4-May-2011 | Now, for your questions, Kaj. Mezzanines execute arbitrary code with DO. You can't even know if something is code or not until you pass it to a dialect interpreter like DO or PARSE - code is data. Blocks don't have bindings, only their any-word contents do, so the code blocks of functions are not bound to functions, only their contents are. The same goes for functions in modules or objects - they aren't bound to their objects or modules, only referenced by them.
(making this up on the fly) It could be possible to make the binding visibility of words be defined as a set of capability tokens as part of the object spec (in the SPEC-OF sense), and have the function spec dialect be extended to contain such tokens. This would have to be checked with every word access, and we would have to be careful to make the model in such a way to avoid unauthorized privilege escalation. Then changes in capabilities would happen on the function call stack, which is task-specific. The problem with this is making sure code can't make functions with more capabilities than the code making them currently possesses. Though R3 doesn't really have a user model, it does have a task model and we could make the capability level task-specific. Code could constrain capabilities for code it calls, but we don't want privilege escalation at function creation time. It would be possible to have privilege escalation at function call time if the function called was created by something with the necessary capabilities.
Drawbacks:
- If we do this for binding visibility, this means a capabilities check would go into every word access. Word access would be SLOW.
- This doesn't add anything to the PROTECT/hide model, which handles binding visibility without the slowdown.
Capabilities would be like the SECURE model, but more flexible, so that's something to consider there. What SECURE protects is heavy enough that a capabilities check wouldn't add much to the overhead. | |
BrianH: 4-May-2011 | Remember, R3 currently has three separate security models: SECURE, (UN)PROTECT, and PROTECT/hide. | |
BrianH: 4-May-2011 | Of the 3, SECURE seems like the most likely to be enhanceable with capabilities. Functions could be enhanced by capabilities specs, where the function code could only create other functions of equal to or lesser capabilities than are currently available in the call stack. Once a function is created, it could run code with the capabilities that it was created with (with the exception of that function creation limitation earlier). There could be a function like DO that reduces capabilities and then does a block of code, and maybe MAKE module! could be made to use that function based on capabilities in the module spec. | |
Kaj: 4-May-2011 | It seems to me that you are still talking in terms of plugging all the holes in the myriad of capability that would supposedly be around. This is not how true capabilities work. They implement POLA: there is no capability unless it is needed, and in that case, it needs to be handed down as a token by the assigner of the work. If the boss doesn't have the token, the employee will by definition not be able to do the work | |
Kaj: 4-May-2011 | I don't see why capabilities would need to be checked on every word access. The critical point is the binding, and REBOL uses this well to optimise word access. Capabilities would need to be determined at binding time, so that binding will fail if the required capability token isn't available | |
Kaj: 4-May-2011 | Have you studied the E language, and Genode for that matter? | |
BrianH: 4-May-2011 | OK, let's work this through for only PROTECT/hide to see how the concept would affect things. PROTECT/hide works by making it so you can't make new bindings to a word - that way words that are already bound can be accessed without extra overhead. Adding capabilities to this means that you could create new bindings to the word if you had the token, but not if you didn't. However, with PROTECT/hide (currently) the already bound words don't get unbound when they are hidden, just new bindings to that word, and if you have access to such a prebound word value then you can make new words with that binding using TO, which effectively makes prebound words into their own capability tokens. So PROTECT/hide *as it is now* could be the basis of a capability system. | |
BrianH: 4-May-2011 | The problem that a capability system has of making sure capability tokens don't leak is pretty comparable to the problem with leaking bindings that we already have to take into account with the PROTECT/hide model, so switching to a capability system for that model gains us nothing that we don't have already. And we've already solved many leaking binding problems by doing things like having BODY-OF function returning an unbound copy of its code block rather than the original. The PROTECT/hide model works pretty well for that, so it's just a matter of closing any remaining holes and making sure things are stable. | |
BrianH: 4-May-2011 | No, but all code created after the word is hidden doesn't get access, and only code created before the hiding has access to a token (bound word) that will let it create new code with access. You get the same sharp separation between code with access and code without. | |
BrianH: 4-May-2011 | Basically, the code that creates the token is the only code that has access to the token, and it can pass that token along to other code if it is safe to do so. The only difference is that code isn't protected unless it needs to be. | |
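A minimal sketch of the token behaviour described above, assuming the R3 alpha PROTECT/hide semantics being discussed (exact calling convention and error texts may differ):
    obj: make object! [
        secret: 42
        get-secret: does [secret]   ; bound to SECRET before it is hidden
    ]
    protect/hide in obj 'secret
    obj/secret       ; error - new code can no longer see or bind to SECRET
    obj/get-secret   ; == 42 - code created before the hiding keeps its access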
Kaj: 4-May-2011 | Again, did you study true capabilities, especially in the E language, but also in Genode and the ground-breaking KeyKos and EROS systems? If you didn't, I can understand why we don't understand each other. By the way, POLA is not a capabilities term, but a generic security term | |
Kaj: 4-May-2011 | Please study http://erights.org and http://genode.org. Without that, this discussion probably won't go anywhere | |
BrianH: 4-May-2011 | OK, the problem with that model *in this case* (PROTECT/hide) is that we are talking about object fields here, bindings of word values. REBOL objects bind their code blocks before executing them. If there is going to be any blocking of bindings, at least the object's own code needs to be bound first. This means that if you are going to make word bindings hidden, you need to do so after the object itself has been made, or at least after its code block has been bound. You can do this binding with PROTECT/hide, or with some setting in an object header, it doesn't matter. Since words are values and their bindings are static, being able to create a new word with the same binding means that you need access to a word with that binding, with *exactly* the same visibility issues as token access. The difference in this case between POLA and PROTECT/hide is whether object fields are hidden by default or not, not a matter of when they are hidden. | |
BrianH: 4-May-2011 | It's still compiled, and word lookup is handled by the compiler, not at runtime. | |
Ladislav: 4-May-2011 | And what never matters to me is pretence. | |
Ladislav: 4-May-2011 | Make parameters not work, and don't do blocks and parens through word values, same as R2's DO of path values. - this is exactly a complicated way to pretend something is more secure than it actually is. The only real effect is obtaining a less comfortable and more annoying system | |
Ladislav: 4-May-2011 | ...and I know it, since it was me who proposed it | |
Ladislav: 4-May-2011 | Regarding APPLY - I was the one who implemented the first APPLY in REBOL, and I was the first to use it to obtain a more secure evaluation than using DO | |
BrianH: 5-May-2011 | pretend something is more secure than it actually is - the biggest security concern of R3 is making sure that outside code can't modify the code of functions. There are various tricks that can be done in order to make sure that doesn't happen (putting calls to function values in parens, APPLY, SECURE 'debug). DO of blocks and parens don't need the APPLY or DO in a paren tricks in order to make sure they can't modify the calling code because they can't take parameters, so you can DO blocks and parens without changes to your source code - SECURE 'debug doesn't require changes to source code. This means that less effort is needed to make DO of blocks or parens more secure than DO of functions that can take parameters. The same goes for DO of any non-function type. If you constrain DO of paths or words with functions assigned to them to not be able to take parameters, then they would be exactly as secure as DO of blocks or parens, and can be called with the same code - no additional code wrappers or screening would be needed. This would make DO of words or paths a drop in substitute for DO of blocks or parens. | |
BrianH: 5-May-2011 | Remember, having it evaluate blocks and parens like that would make it not consistent with how words and paths are treated when evaluated inline. | |
Ladislav: 5-May-2011 | currently, and both prefer the lit-word! datatype as the result | |
BrianH: 5-May-2011 | I'm more concerned with people trying to sneak functions into data, which could then use parameters to get access to the original code of another function. This can be used for code injection, or for getting access to bound words that have since been hidden, or to internal contexts. Given that words are often valid data, having a special case where they can execute functions by sneaking in a bound word is a real concern. However, if that function can't take parameters, there is no hacking potential and function code can be secure. The workaround if you don't want any functions executed (beyond the hacking) could be to unbind words retrieved from data, bind them to a known context, or just avoid using DO word or path altogether. | |
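A minimal sketch of the concern (behaviour as discussed here for R2 and the R3 of the time; HACK and DATA are hypothetical names):
    hack: does [print "function ran from data"]
    data: reduce ['hack]   ; a block of "data" that happens to contain a bound word
    do first data          ; prints "function ran from data" - DO of the word calls the function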
BrianH: 5-May-2011 | As for #1434 (and your most recent code example), I would prefer to have lit-words and lit-paths be consistently active values (the way lit-words are in R3 now) for the same reasons you proposed #1881 and #1882. This means having them convert to word and path when they are evaluated instead of just gotten. But if you would prefer them to be a special case like parens (the way lit-paths are in R3 now), that would work for me too as long as that is the case for both lit-words and lit-paths - it would make them a little easier to work with. | |
BrianH: 5-May-2011 | Whether or not there is going to be a difference between inline evaluation of lit-words and evaluation of lit-word values, evaluation of lit-word values needs to be consistent whether you do so by referring to them with an inline word, or through explicit DO. R2's behavior is a bug. | |
Ladislav: 5-May-2011 | Once again that "consistent" word. There is the main difference. I do not think you can call "inconsistency" any difference in the evaluation of the former and the latter expression, since the former expression is about handling lit-words as arguments of the DO function, while the latter is about handling words as inline block values, when they refer to lit-words. | |
BrianH: 5-May-2011 | I've been using the terms inline evaluation for having the value inline in the code, regular evaluation for when the value is referred to through an inline word, and explicit evaluation for when the value is passed to DO directly. If the first is to be different, the latter two need to be consistent with each other, same as with parens. | |
BrianH: 5-May-2011 | OK. There is stuff like functions, and these are called "active values" - they behave consistently for inline, regular or explicit evaluation. There is stuff like parens, which behave one way for inline, and a different way for regular or explicit evaluation - what term would you like to use to refer to this pattern with? Because those are the only two choices that would make sense with lit-words and lit-paths. | |
BrianH: 5-May-2011 | Maxim apparently prefers that lit-words and lit-paths act like parens, and so does Geomol. I would be OK with that model. | |
BrianH: 5-May-2011 | The proposed model is something like this:
>> 'a/1
== a/1 ; inline evaluation of lit-path converts to path
>> b: quote 'a/1
== 'a/1
>> b
== 'a/1 ; regular evaluation of lit-path value does not convert to path
>> do :b
== 'a/1 ; explicit evaluation of lit-path value does not convert to path
So it's not exactly like parens, but it's what Maxim, Geomol and I would prefer. | |
Ladislav: 5-May-2011 | Your terminology (inline evaluation, regular evaluation and explicit evaluation) is acceptable for me (although I am not sure about "regular evaluation"; isn't there a chance to find a better notion?) | |
BrianH: 5-May-2011 | The #1434 ticket is about making lit-word and lit-path consistent with each other, and about documenting some of the intentional changes from R2. |