Mailing List Archive: 49091 messages

[BUG] in mysql-protocol 0.9.9

 [1/4] from: rebol-list2::seznam::cz at: 29-Jul-2003 12:28


Hello doc,

just want to tell you that you are using the variable 'byte in the global context in your mysql-protocol.r version 0.9.9!

FIX: in the "data reading" section, change the line from:

b0: b1: b2: b3: int: int24: long: string: field: len: none

to:

b0: b1: b2: b3: int: int24: byte: long: string: field: len: none

PS: I'm now experimenting with running the protocol as an async port, so I would like to know what's the best way to deal with the data. Is it better to process only one result row at a time when the port's awake handler fires, or is it better to process, for example, 10 or more rows at this time? And is there any support in the handler to detect whether the port is still busy (so that I don't send any other commands to the db port while data from the previous SQL command is still being processed)?

-- Best regards, Oldes mailto:[oliva--david--seznam--cz]
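A minimal sketch of the kind of leak being reported here (illustrative only, not code from mysql-protocol.r; 'leaky and its body are made up): a set-word used inside a function body without being listed in /local is bound to the global context, so it silently overwrites any 'byte the caller may already be using.

leaky: func [data [binary!]][
    byte: first data    ; 'byte is not listed in /local, so it is set in the global context
    byte + 1
]
byte: "caller's own value"
leaky #{05}
print byte    ; prints 5 -- the caller's 'byte has been overwritten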

 [2/4] from: dockimbel:free at: 29-Jul-2003 14:58


Hi rebOldes,

Quoting rebOldes <[rebol-list2--seznam--cz]>:

> Hello doc,
> just want to tell you that you are using the variable 'byte in the global
<<quoted lines omitted: 4>>
> to:
> b0: b1: b2: b3: int: int24: byte: long: string: field: len: none
I would hardly call that a bug and I don't think it deserves a [BUG] banner, but anyway, thanks for the report. I guess you had a hard time finding the guilty source code. ;-)
> PS:
> I'm now experimenting with running the protocol as an async port, so

Good idea!

> I would like to know what's the best way to deal with the data.
> Is it better to process only one result row at a time when the port's
> awake handler fires, or is it better to process, for example, 10 or more rows at
It depends on the level of granularity you want to achieve. If you want to multiplex several I/O tasks you'll get better results with small amounts of processing in the awake handler. The MySQL driver engine was designed to read and process logical data packets (my pgsql driver works on physical (TCP) packets with greater performance), so you have to let it decode at least a complete row. I would do only one row per awake event. The best way would be to recode the driver to make it async by design.
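A short sketch of that "one row per awake event" idea, modelled on the handler Oldes posts later in this thread. It assumes copy/part on the db port returns a block of at most one decoded row (as Oldes' copy/part port 10 suggests); networking/add-port comes from Oldes' own async framework and process-row is a hypothetical placeholder.

one-row-handler: func [port /local rows][
    ; read at most one decoded row per awake call, keeping each
    ; callback short so other I/O can be serviced in between
    if all [rows: copy/part port 1  not empty? rows][
        process-row first rows    ; process-row is a hypothetical helper
    ]
]
networking/add-port db :one-row-handler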
> this time? And is there any support in the handler to detect
> whether the port is still busy (so that I don't send any other commands
> to the db port while data from the previous SQL command is still being
> processed)?

Yes, port/locals/stream-end?. If 'false, then the driver is expecting more data to come. You can cancel a pending command by reading all incoming data at low level and trashing it. See the 'flush-pending-data function.

HTH,

-DocKimbel
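A minimal sketch of that guard, checking locals/stream-end? the same way Oldes' working code below does before sending a new statement (send-when-idle is a hypothetical helper name, not part of the driver):

send-when-idle: func [db [port!] sql [string!]][
    ; only push a new statement when the previous result stream
    ; has been fully read by the driver
    either db/locals/stream-end? [
        insert db sql
        true
    ][
        false    ; still busy: the caller should queue or retry later
    ]
]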

 [3/4] from: rebol-list2:seznam:cz at: 4-Aug-2003 20:42


Hello dockimbel,

Tuesday, July 29, 2003, 2:58:23 PM, you wrote:

dff> Hi rebOldes,
dff> Quoting rebOldes <[rebol-list2--seznam--cz]>:

>> Hello doc,
>> just want to tell you that you are using the variable 'byte in the global
<<quoted lines omitted: 7>>
>> b0: b1: b2: b3: int: int24: byte: long: string: field: len: none
>>
dff> I would hardly call that a bug and I don't think it deserves a [BUG] banner,
dff> but anyway, thanks for the report. I guess you had a hard time finding the
dff> guilty source code. ;-)

I found it by accident: as you can see, I'm trying to use your protocol in async mode, and 'byte is not such a rare word that it can safely be left as a global one. Maybe it would be good to have some utility which would be able to show how many new global variables some rebol code creates :)
>> PS:
>> I'm now experimenting with running the protocol as an async port, so

dff> Good idea!

>> I would like to know what's the best way to deal with the data.
>> Is it better to process only one result row at a time when the port's
>> awake handler fires, or is it better to process, for example, 10 or more rows at
dff> It depends on the level of granularity you want to achieve. If you want to
dff> multiplex several I/O tasks you'll get better results with small amounts of
dff> processing in the awake handler. The MySQL driver engine was designed to read
dff> and process logical data packets (my pgsql driver works on physical (TCP)
dff> packets with greater performance), so you have to let it decode at least a
dff> complete row. I would do only one row per awake event. The best way would be
dff> to recode the driver to make it async by design.

It's too difficult for me to do the re-coding myself. I get quite good results with copy/part of 10 result rows per 'awake when there are hundreds of results, and it seems to be OK (all these results are sent as responses to more than one user).
>> this time? And is there any support in the handler to detect
>> whether the port is still busy (so that I don't send any other commands
>> to the db port while data from the previous SQL command is still being
>> processed)?
dff> Yes, port/locals/stream-end?. If 'false, then the driver is expecting more
dff> data to come. You can cancel a pending command by reading all incoming data
dff> at low level and trashing it. See the 'flush-pending-data function.

Yes, that's what I found by myself, and it's OK with SELECTs, but I would like to ask you how safe it is with INSERT or UPDATE SQL commands, which seem not to produce any response (I mean, if I send these commands too quickly). I use something like this in my project, where I'm using your protocol in 'async' mode, and it's working without problems:

db: open mysql://root:[root--127--0--0--1]/oldes

mysql-queries-buffer: make block! 10  ;["SQL-query" target-port :handler]

mysql-handler: func [port /local records][
    probe records: copy/part port 10
    mysql-next-query?
]
networking/add-port db :mysql-handler

mysql-make-query: func [query [string!] port handler][
    ;print ["make-query:" mold query]
    either db/locals/stream-end? [
        db/user-data: port
        networking/change-handler db :handler
        insert db query
    ][
        insert tail mysql-queries-buffer reduce [query port :handler]
    ]
]

mysql-next-query?: func [][
    if all [db/locals/stream-end? not empty? mysql-queries-buffer][
        db/user-data: mysql-queries-buffer/2
        networking/change-handler db third mysql-queries-buffer
        insert db mysql-queries-buffer/1
        remove/part mysql-queries-buffer 3
    ]
]

;------- then I use something like this to make a query:

mysql-make-query rejoin [
    "INSERT INTO mapa_krizovatky_brno VALUES (NULL," m/x "," m/y ");"
] none none

mysql-make-query rejoin [
    "SELECT * FROM mapa_krizovatky_brno WHERE id=LAST_INSERT_ID();"
] port func [port /local row][
    probe row: first first port  ;there is no need to copy more than one row
    insert row "cr"
    insert db/user-data join (rejoin-with row #"|") #"^@"
    row: none
    mysql-next-query?
]

I'm using only one MySQL connection, so it should be safe, but anyway, is there some way to get LAST_INSERT_ID directly? How does, for example, the mysql_insert_id() function in PHP work? Is it using some internal 'select' call to MySQL, or is this value available from some response?

-- Best regards, rebOldes mailto:[oliva--david--seznam--cz]

 [4/4] from: maximo:meteorstudios at: 4-Aug-2003 15:44


hi oldes,
> Maybe it would be good to have some utility which would be able to
> show how many new global variables some rebol code creates :)
this might help!!!!!

;-------------------code start-----------------------
rebol []

myfunc: function [hello][there][
    gblvar: "ggogo"
    there: 'kkk
    print hello
]

gbl-list: query/clear system/words
probe gbl-list

myfunc "print test"

gbl-list: query system/words
probe gbl-list

;--------------
; if I understand what is happening here, gbl-list
; should only have gbl values which have
; changed since the last query/clear
;--------------
foreach item gbl-list [
    probe to-word item
    probe value? item
]

ask "done..."
;-------------------code end-----------------------

Notes
  • Quoted lines have been omitted from some messages.