World: r3wp

[MySQL]

Will
13-Jan-2009
[1068x4]
http://en.wikipedia.org/wiki/Diff
can you diff..
btw, I suggest not storing images in the db; store them somewhere 
on your hard drive and put a pointer to the file in the db. If you 
really need it and can't find what's wrong, ping me over the weekend 
and I can test that here, although I use other binary data with mysql-protocol 
with no problem. Not sure if it applies, but also check the encodings 
for mysql storage, etc.
amacleod
13-Jan-2009
[1072]
I got it....
I have to convert it back to binary ("to-binary").

I assumed that if I was giving it a binary file it would remain 
binary, just as SQLite treated it.... The field attributes state "binary".

Thanks for the help, Will!
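
For anyone hitting the same thing, the round trip looks roughly like 
this (just a sketch; the table and column names are made up, and the 
placeholder-block form assumes a driver version that supports it):

    db: open mysql://user:pass@localhost/testdb
    data: read/binary %photo.jpg

    ; the driver escapes binary values passed in a placeholder block
    insert db ["INSERT INTO files (name, content) VALUES (?, ?)" "photo.jpg" data]

    ; on the way back the BLOB arrives as string!, so convert it again
    insert db "SELECT content FROM files WHERE name = 'photo.jpg'"
    row: first copy db
    img: to-binary first row
    close db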
Davide
1-Mar-2009
[1073]
Is there a MySQL guru around? I need to optimize this query:


SELECT A.cod, A.date , SUM(B.amount) AS amount, COUNT( * ) AS numrow 
FROM A INNER JOIN A AS B ON A.cod=B.cod AND B.date <= A.date WHERE 
A.cod IS NOT NULL GROUP BY A.cod, A.date


This returns "running sums" (a partial sum for every date and every 
cod).
Both cod and date are indexed.


In a table of about 100'000 records it takes 104 sec to complete, 
while in SQL Server 2005 (the same query, on the same data, with the 
same indexes) it takes 3 sec!

I've tried MyISAM, InnoDB, and MEMORY storage, and used both BTREE 
and HASH indexes.
I've tried to FORCE INDEX for the join and the group too.

Any suggestions?
Oldes
2-Mar-2009
[1074]
is there any reason why you join A on A? also.. have you tried to 
EXPLAIN the query? http://dev.mysql.com/doc/refman/5.0/en/explain.html
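e.g. just prefix the query with EXPLAIN and check the type/key/rows 
columns to see whether the indexes are really used:

    EXPLAIN SELECT A.cod, A.date, SUM(B.amount) AS amount, COUNT(*) AS numrow
    FROM A INNER JOIN A AS B ON A.cod = B.cod AND B.date <= A.date
    WHERE A.cod IS NOT NULL
    GROUP BY A.cod, A.date;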
Davide
2-Mar-2009
[1075]
> is there any reason why you join A on A?

If I have these records:

cod 	date		amount
A	2009/03/01	10	
A	2009/03/03	30
A	2009/03/04	20
A	2009/03/07	5
B	2009/03/02	17
B	2009/03/10	5


That query gives me the sum of the previous amounts for every date/cod: 

cod 	date		amount	NumRow
A	2009/03/01	10		1	
A	2009/03/03	40		2
A	2009/03/04	60		3
A	2009/03/07	65		4
B	2009/03/02	17		1
B	2009/03/10	22		2		

I don't know if there's a better method without using a join.
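The only join-free variant I can think of is a correlated subquery, 
and I doubt MySQL runs it any cheaper (untested, and it assumes the 
cod/date pairs are unique):

    SELECT A.cod, A.date,
           (SELECT SUM(B.amount) FROM A AS B
             WHERE B.cod = A.cod AND B.date <= A.date) AS amount,
           (SELECT COUNT(*) FROM A AS B
             WHERE B.cod = A.cod AND B.date <= A.date) AS numrow
    FROM A
    WHERE A.cod IS NOT NULL;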

> also.. have you tried to EXPLAIN the query?


Yes, EXPLAIN shows that the correct indices are used. Not very 
informative.
Maarten
2-Mar-2009
[1076]
Davide, first: NULL values are evil (as are duplicate rows). My guess 
is that the cost comes from your query probably running a full table 
scan, as it needs to sum all of the rows... So table partitioning 
will help a lot. My guess is also that Oracle, SQL Server and perhaps 
PostgreSQL are smarter with their table optimizers and query rewriters. 
HTH
BrianH
2-Mar-2009
[1077]
Also, change the count(*) to count(1), so MySQL doesn't push the 
full contents of the join through the count processor.
Davide
2-Mar-2009
[1078]
> My guess is also that Oracle, SQL Server and perhaps PostgreSQL 
are smarter with their table optimizers and query rewriters


Postgres is about 2 times faster than MySQL on this query, but the 
execution time grows a lot as the number of records increases. 
So I really don't know how well it compares to MySQL at scale.
Oracle, I would try, but I have no time :-P

> Also, change the count(*) to count(1)

Thanks, good one.


The real tables will be much larger (5 M records or more), so small 
optimizations would not be sufficient.
I'm trying a different approach, using one simple stored function:

create function running_total (cod VARCHAR(50), adder DECIMAL)
RETURNS DECIMAL
	BEGIN
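		-- a new cod starts here: reset the session-variable accumulators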
		IF @last_cod <> cod THEN
			BEGIN
				SET @running_total = 0;
				SET @last_cod = cod;
				SET @num_row = 0;
			END;
		END IF;
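		-- accumulate across rows, relying on the ORDER BY cod, date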
		SET @running_total = @running_total + adder;
		SET @num_row = @num_row + 1;
		RETURN @running_total;
	END

and calling it from a select:

SET @last_cod = ''; 
SET @running_total = 0; 
SET @num_row = 0; 

SELECT cod , date, running_total(cod , amount), @num_row FROM a ORDER 
BY cod, date;


This approach seems really fast: processing and inserting 100'000 
records took less than 1 sec. instead of 281.73 sec. with the SQL join.
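
(A note if you paste the function into the mysql command-line client: 
the ; inside the body ends the statement early, so wrap the CREATE 
FUNCTION in a DELIMITER change, e.g.:

    DELIMITER //
    create function running_total (cod VARCHAR(50), adder DECIMAL)
    RETURNS DECIMAL
    BEGIN
        ...
    END //
    DELIMITER ;
)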
DideC
30-Mar-2009
[1079]
I have a new mysql server, but I can't connect to it with %mysql-protocol 
v1.2.1:

>> open mysql://user:traiteur@localhost/testjmg
connecting to: localhost

** User Error: ERROR 1251 : Client does not support authentication 
protocol requested by server; consider upgrading MySQL client
** Near: open mysql://user:traiteur@localhost/testjmg


I know I have to change something in the mysql server configuration, 
but can anyone point me to what it is?
Pekr
30-Mar-2009
[1080]
there are two types of authentication schemes ... IIRC, from 4.1 or 
4.0.1 onwards MySQL switched to stronger authentication. So if you 
upgraded or used older DBs, you have to explicitly set the old auth 
method ...
DideC
30-Mar-2009
[1081]
It's MySQL - 5.0.51a
Do you know how to do that?
Dockimbel
30-Mar-2009
[1082x4]
I'm using 5.0.18 and don't have such issues. Anyway, to find a fix, 
have a look here: http://dev.mysql.com/doc/refman/5.1/en/old-client.html
Starting mysqld with the --old-passwords option seems the simplest 
way to work around that.
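Or, per that manual page, reset just that one account to the pre-4.1 
hash format (substitute your own user, host and password):

    SET PASSWORD FOR 'user'@'localhost' = OLD_PASSWORD('newpwd');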
Btw, you should check that your code is not using an older version 
of %mysql-protocol.r (just in case you're reusing old source files).
Just installed 5.0.77 community server on Windows, no problem to 
log in using default password mode with mysql protocol v1.2.1.
DideC
30-Mar-2009
[1086x4]
OK, old-password is off in the server variables. Do you know where 
the config file is in which these variables are set (Ubuntu)?
The client version is the latest one.
(I just looked at the variable in phpMyAdmin.)
Does protocol v1.2.1 work with the new password method? If so, maybe 
I just need to update the user password in the db?
Dockimbel
30-Mar-2009
[1090x2]
v1.2.1 is supposed to work with both old and new passwords.
MySQL config file (my.cnf) can be in one of these places:
- /etc/
- $HOME/ 
- MySQL data folder
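On Ubuntu it's typically /etc/mysql/my.cnf. To force the old hashes 
server-wide you would set, under the server section:

    [mysqld]
    old_passwords = 1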
DideC
1-Apr-2009
[1092]
Ok, it works now. My bad: I put the latest protocol in one folder, 
but I was still loading the old one from another folder!
amacleod
17-Apr-2009
[1093x3]
Can you create a database remotely?

Anyone know the syntax?

I tried:
insert db [create-db "test2"]

with db being the port, but it does not seem to work
Is there a limit to the number of tables in a database?
Anyone know the SOP for this situation:

 I have hundreds of users (not yet, really) and I need to store several 
 tables of user data for each. Should I create a separate database 
 for each user, or is it better to use a naming scheme for the tables 
 and store all of them in one database? Or is it just personal preference?
Oldes
17-Apr-2009
[1096x4]
insert db "SELECT * FROM user"
insert db "CREATE DATABASE test2"
forget the first one:)
I don't know details, but I think one database should be enough
amacleod
17-Apr-2009
[1100x2]
Thanks Oldes... I think that will make everything simpler for me 
too.
Oldes
17-Apr-2009
[1102]
I think there is no limit on the number of tables. A database in MySQL 
is implemented as a directory containing files that correspond to 
tables in the database.
amacleod
17-Apr-2009
[1103x2]
I still can't create a db, but never mind, as I'm going with tables... 
Thanks again
I got it!
Dockimbel
17-Apr-2009
[1105]
insert db [create-db "test2"] works flawlessly here.... You've probably 
connected to the server with a user that didn't have enough rights 
to create databases, but it's hard to figure that out from a "does 
not seem to work" issue description.
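If it is a rights problem, the account needs the CREATE privilege, 
e.g. (adjust the user and host to yours):

    GRANT CREATE ON *.* TO 'user'@'%';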


I'm starting to better understand Carl's wish for filtered communication 
channels that improve the signal-to-noise ratio in order to save 
some valuable working time...
BrianH
17-Apr-2009
[1106x2]
I've run into situations where MySQL will fail to do DDL without 
throwing an error, at least not one the mysql:// driver notices. 
Haven't tested with the /Command driver or qcmysql:// though.
The DDL in question is create table statements where some obscure 
semantic rule is violated. By obscure, I mean it took me a day to 
track down the error in the MySQL manuals.
Dockimbel
17-Apr-2009
[1108]
You should try with the official command line mysql client. If you 
notice a different behaviour than the mysql:// driver, please report 
it here.
BrianH
17-Apr-2009
[1109]
Does the /Command client support MySQL 5.1?
Dockimbel
17-Apr-2009
[1110x2]
Don't know, I think it doesn't support anything above 3.x but I can't 
find the full /Command feature list on rebol.com to confirm.
Btw, I've never tested my driver with a 5.1.x server.
Maarten
18-Apr-2009
[1112]
Yeah, but your driver has the source. Also, in the ancient times, 
before Command 2.x and your mysql://, I did a library interface; it's 
still on rebol.org
Will
21-Apr-2009
[1113]
I use your driver with 5.1.x no problem 8-)
Dockimbel
21-Apr-2009
[1114]
Good to know! Thanks.
amacleod
29-Apr-2009
[1115]
for large query results is there a way to show a progress bar as 
an indicator of the download progress?

Is there some type of callback method like "read-net"?
Dockimbel
29-Apr-2009
[1116]
No, there's no such callback. But you can easily add one to my driver 
by inserting a call to "read-net" in the FOREVER loop inside the 
READ-ROWS function. Can you describe briefly the situation where you 
need to get large query results and display a progress bar (just curious)?
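Something along these lines (only a sketch; the real loop body in 
read-rows is different, and the hook name is made up):

    ; in %mysql-protocol.r, inside read-rows' FOREVER loop, once per
    ; fetched row:
    ;     rows-read: rows-read + 1
    ;     if value? 'progress-hook [progress-hook rows-read]

    ; then, in your app:
    progress-hook: func [n [integer!]][
        if zero? n // 100 [print rejoin ["rows so far: " n]]
    ]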
amacleod
29-Apr-2009
[1117]
I'm hosting a large number of text docs, broken up by sections, in 
a mysql db. Once all the material is online, the changes (and thus 
downloads) will be small for the most part. But in the meantime, large 
amounts of material are still to be uploaded to the db, and when a 
client finds this new material it will download it along with any 
changes to previous material. The material contains a lot of image 
data too, so it could add up. Right now I have about 40 megs of data 
and I have about 10X more to upload.


If it's too tough for me I will just display some animated gif, but 
I would prefer something more tangible..

I'll look into your suggestion. Thanks, Doc...


BTW, I tried the latest mysql-protocol and it broke my app. I haven't 
had a chance to look into it, but I think you changed some syntax 
for selects??
I'm using version 1.1.2 at the moment.