Right now I run a SELECT COUNT(*) on a MyISAM table
to get the total row count, divide it by the page size y to get the number of pages, work out the row offset for the requested page x, and then build a subquery (pseudocode):
select * from ? where id>(select id from ? order by id limit x*y,1) limit y
The above is only pseudocode, but that is roughly the idea. Is there a more elegant way to do pagination? I'd like to properly understand this myself. Any good ideas or existing libraries (wheels) would be much appreciated!
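For concreteness, a minimal sketch of this idea against a hypothetical articles table with an auto-increment id primary key (the table and column names are assumptions, and the >= form is used so that the offset is simply (page - 1) * page_size):

-- page size y = 10; for the third page the 0-based row offset is (3 - 1) * 10 = 20
SELECT *
FROM articles
WHERE id >= (SELECT id FROM articles ORDER BY id LIMIT 20, 1)  -- first id of the page
ORDER BY id
LIMIT 10;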
#1 mcfog 2015-03-01 09:45:31 +08:00 via Android
I don't see what the subquery buys you here... wouldn't a plain LIMIT do?
#2 laoyuan 2015-03-01 10:14:34 +08:00
limit (x - 1) * y, y
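A worked instance of that formula, assuming the same hypothetical articles table as above: for page x = 3 with y = 10 rows per page, the offset is (3 - 1) * 10 = 20.

-- plain OFFSET paging: skip 20 rows, return the next 10
SELECT * FROM articles ORDER BY id LIMIT 20, 10;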
#3 liuhaotian OP
#4 coolwind 2015-03-01 11:14:22 +08:00
Just use a single LIMIT:
select * from ? where id in (select id from ? order by id limit (x-1)*y,y)
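One caveat with this form: MySQL (certainly 5.1, as noted in #8 below, and later 5.x releases as well) does not accept LIMIT inside an IN subquery, so in practice it is usually rewritten with a derived table. A sketch against the same hypothetical articles table:

-- same idea, but the LIMIT lives in a derived table rather than in the IN subquery
SELECT a.*
FROM articles AS a
INNER JOIN (SELECT id FROM articles ORDER BY id LIMIT 20, 10) AS b ON a.id = b.id
ORDER BY a.id;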
#5 liuhaotian OP
#6 coolwind 2015-03-01 11:36:06 +08:00
If the data set is large, select * from ? where id in (select id from ? order by id limit (x-1)*y,y) is more efficient;
if it's small, a plain LIMIT is a bit faster.
#7 frankzeng 2015-03-01 11:46:04 +08:00
jQuery DataTables can do the paging on the front end; it's just a bit slow the first time the page loads.
#8 coolwind 2015-03-01 11:51:40 +08:00
The bigger the table and the further back the records you fetch, the bigger the gap (MySQL 5.1 doesn't support the LIMIT-in-subquery form above; it can be rewritten as an inner join):

mysql> select count(*) from piwigo_history;
+----------+
| count(*) |
+----------+
|  6783731 |
+----------+
1 row in set (0.00 sec)

mysql> select a.* from piwigo_history as a inner join (select id from piwigo_history order by id limit 6783400,10) as b on a.id=b.id order by a.id;
10 rows in set (1.24 sec)

mysql> select a.* from piwigo_history as a inner join (select id from piwigo_history order by id limit 6783410,10) as b on a.id=b.id order by a.id;
10 rows in set (1.32 sec)

mysql> select a.* from piwigo_history as a inner join (select id from piwigo_history order by id limit 6783420,10) as b on a.id=b.id order by a.id;
10 rows in set (1.34 sec)

Reading directly with a plain LIMIT:

mysql> select * from piwigo_history order by id limit 6783400,10;
10 rows in set (12.76 sec)

mysql> select * from piwigo_history order by id limit 6783410,10;
10 rows in set (14.33 sec)

mysql> select * from piwigo_history order by id limit 6783420,10;
10 rows in set (13.13 sec)
#9 coolwind 2015-03-01 11:57:15 +08:00
Your approach is also quite fast:

mysql> select * from piwigo_history where id>=(select id from piwigo_history order by id limit 6783400,1) order by id limit 10;
10 rows in set (1.25 sec)

mysql> select * from piwigo_history where id>=(select id from piwigo_history order by id limit 6783410,1) order by id limit 10;
10 rows in set (1.33 sec)

mysql> select * from piwigo_history where id>=(select id from piwigo_history order by id limit 6783420,1) order by id limit 10;
10 rows in set (1.37 sec)
#10 liuhaotian OP
@coolwind Actually I think what I'm using is basically the same as yours; the plain LIMIT is what differs from our approach. The timings differ by an order of magnitude; is that because a plain LIMIT scans the whole table?
#11 tabris17 2015-03-01 12:26:52 +08:00
And if you ever need to sort by something other than id, you're stuck.
#12 laoyuan 2015-03-01 13:26:06 +08:00
So IN is still the better choice.
#13 cevincheung 2015-03-01 13:44:35 +08:00
Switch to PGSQL and stay safe.
#14 coolwind 2015-03-01 13:54:20 +08:00
No index is used, so it really is a full table scan followed by a filesort (which is very slow):

mysql> explain select * from piwigo_history order by id limit 100000,10;
+----+-------------+----------------+------+---------------+------+---------+------+---------+----------------+
| id | select_type | table          | type | possible_keys | key  | key_len | ref  | rows    | Extra          |
+----+-------------+----------------+------+---------------+------+---------+------+---------+----------------+
|  1 | SIMPLE      | piwigo_history | ALL  | NULL          | NULL | NULL    | NULL | 6784937 | Using filesort |
+----+-------------+----------------+------+---------------+------+---------+------+---------+----------------+
1 row in set (0.00 sec)

mysql> select @@version;
+------------+
| @@version  |
+------------+
| 5.1.73-log |
+------------+
1 row in set (0.00 sec)

Of course, if you don't need the ordering, a plain LIMIT is still reasonably fast:

mysql> select * from piwigo_history limit 5000000,10;
10 rows in set (1.50 sec)

mysql> select * from piwigo_history limit 6000000,10;
10 rows in set (1.84 sec)
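For comparison, the expectation is that the id-only derived table from #8 can be resolved from the PRIMARY KEY alone instead of a filesort, which is where the order-of-magnitude gap comes from; one way to check this on the same table (the actual EXPLAIN output will depend on the server):

EXPLAIN
SELECT a.*
FROM piwigo_history AS a
INNER JOIN (SELECT id FROM piwigo_history ORDER BY id LIMIT 6783400, 10) AS b ON a.id = b.id
ORDER BY a.id;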
#15 liuhaotian OP
@coolwind I'll give it a try, thanks. You're right, I don't actually need the ordering.
#16 otakustay 2015-03-01 21:02:58 +08:00
When the data set is large, IN is faster. The avg row size also matters: the larger the avg row size, the slower LIMIT-based paging gets, and with a few TEXT columns it becomes unbearably slow.
A faster approach is to create a separate table that stores only the ids: first fetch the ids from that table, then use IN to get the full rows. The id table can also be moved into Mongo or some other store better suited to this kind of workload, or even loaded into memory when the system starts (as long as you keep it synchronized).
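A minimal sketch of the id-table variant described here, with assumed names (article_ids, articles) purely for illustration:

-- narrow lookup table that holds only the primary keys
CREATE TABLE article_ids (
  id INT UNSIGNED NOT NULL,
  PRIMARY KEY (id)
);

-- populate / refresh it from the main table (triggers or application-level
-- writes are the usual ways to keep it in sync)
INSERT IGNORE INTO article_ids (id) SELECT id FROM articles;

-- step 1: page through the narrow table; even a large offset only touches the id index
SELECT id FROM article_ids ORDER BY id LIMIT 6783400, 10;

-- step 2: fetch the full rows for just those ids (placeholder values shown where
-- the application would substitute the ids returned by step 1)
SELECT * FROM articles WHERE id IN (6783401, 6783402, 6783403) ORDER BY id;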