
Repost: Why does MySQL higher LIMIT offset slow the query down

Source: 懂视网  Editor: 小采  Date: 2020-11-09 07:40:32

From: http://stackoverflow.com/questions/4481388/why-does-mysql-higher-limit-offset-slow-the-query-down


Scenario in short: a table with more than 16 million records [2GB in size]. The higher the LIMIT offset in a SELECT, the slower the query becomes when using ORDER BY *primary_key*.

So

SELECT * FROM large ORDER BY `id` LIMIT 0, 30 

takes far less than

SELECT * FROM large ORDER BY `id` LIMIT 10000, 30 

Both queries return only 30 ordered records, the same either way, so the overhead is not from ORDER BY.
Fetching the latest 30 rows currently takes around 180 seconds. How can I optimize this simple query?

It's normal that higher offsets slow the query down, since the query needs to count off the first OFFSET + LIMIT records (and return only LIMIT of them). The higher this value, the longer the query runs.

The query cannot jump straight to OFFSET because, first, the records can be of different lengths and, second, there can be gaps from deleted records. It needs to check and count each record on its way.
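The effect of such gaps can be demonstrated concretely. The following sketch uses SQLite in place of MySQL, with an illustrative table that is not from the original post: after deleting some rows, the row at OFFSET 50 is not the row with id 51, which is exactly why the engine must count off rows rather than jump by id.

```python
import sqlite3

# Hypothetical in-memory table standing in for the post's 16M-row table;
# table and column names are illustrative, not from the original.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO large (id, payload) VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 101)])
# Delete every tenth row to create gaps in the id sequence.
conn.execute("DELETE FROM large WHERE id % 10 = 0")
conn.commit()

# With gaps, the 51st surviving row (OFFSET 50) has id 56, not 51:
# the engine had to walk past 50 surviving rows to find it.
(row_id,) = conn.execute(
    "SELECT id FROM large ORDER BY id LIMIT 1 OFFSET 50").fetchone()
print(row_id)  # → 56
```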

Assuming that id is a PRIMARY KEY of a MyISAM table, you can speed it up by using this trick:

SELECT t.*
FROM (
    SELECT id
    FROM mytable
    ORDER BY id
    LIMIT 10000, 30
) q
JOIN mytable t ON t.id = q.id
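A runnable sketch of this deferred-join ("late row lookup") pattern, using SQLite in place of MySQL/MyISAM with illustrative names: the inner query touches only the narrow primary-key index to locate the 30 ids, and the join then fetches the full rows for just those ids.

```python
import sqlite3

# Build an illustrative table (names are assumptions, not from the post).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO mytable (id, payload) VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 20001)])
conn.commit()

# Inner query scans only the primary-key index for 30 ids; the join then
# looks up the wide rows for just those 30 ids.
rows = conn.execute("""
    SELECT t.*
    FROM (SELECT id FROM mytable ORDER BY id LIMIT 30 OFFSET 10000) AS q
    JOIN mytable AS t ON t.id = q.id
    ORDER BY t.id
""").fetchall()
print(rows[0][0], rows[-1][0])  # → 10001 10030
```

The saving comes from deferring the expensive full-row lookups until after the cheap index-only pagination has narrowed the result to 30 ids.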

See this article:

  • MySQL ORDER BY / LIMIT performance: late row lookups
  • MySQL cannot go directly to the 10000th record (or the 80000th byte, as you're suggesting) because it cannot assume that the data is packed/ordered like that (or that it has continuous values from 1 to 10000). Although it might be that way in actuality, MySQL cannot assume that there are no holes/gaps/deleted ids.

    So, as bobs noted, MySQL will have to fetch 10000 rows (or traverse the first 10000 entries of the index on id) before finding the 30 to return.

    EDIT: To illustrate my point:

    Note that although

    SELECT * FROM large ORDER BY id LIMIT 10000, 30 
    

    would be slow(er),

    SELECT * FROM large WHERE id > 10000 ORDER BY id LIMIT 30 
    

    would be fast(er), and would return the same results provided that there are no missing ids (i.e. gaps).
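This keyset ("seek") alternative can also be sketched in runnable form, again with SQLite standing in for MySQL and illustrative names; the WHERE clause seeks directly into the index instead of counting off the first 10000 rows.

```python
import sqlite3

# Gap-free id column, as the equivalence above requires; names are
# illustrative, not from the original post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE large (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO large (id, payload) VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 20001)])
conn.commit()

# OFFSET pagination: counts off 10000 rows before returning 30.
offset_page = conn.execute(
    "SELECT id FROM large ORDER BY id LIMIT 30 OFFSET 10000").fetchall()
# Keyset pagination: seeks straight to id > 10000 in the index.
seek_page = conn.execute(
    "SELECT id FROM large WHERE id > 10000 ORDER BY id LIMIT 30").fetchall()
print(offset_page == seek_page)  # → True (same rows when there are no gaps)
```

In practice the seek form stays fast at any page depth, which is why later pagination advice generally favors remembering the last-seen id over raw OFFSET.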


    References:

    1. Why pagination over long-tail data is complex to implement (an excellent article):

    http://timyang.net/data/key-list-pagination/
