
MySQL dumps



As part of the HTTP Archive project, I create MySQL dumps for each crawl (on the 1st and 15th of each month). You can access the list of dumps from the downloads page. Several people use these dumps, most notably Ilya Grigorik, who imports the data into Google BigQuery.

For the last year I’ve hesitated on many feature requests because they require schema changes. I wasn’t sure how changing the schema would affect the use of the dump files that preceded the change. This blog post summarizes my findings.

Format

When I started the HTTP Archive all the dumps were exported in MySQL format using a command like the following:

mysqldump --opt --skip-add-drop-table -u USERNAME -p -h SERVER DBNAME TABLENAME | gzip > TABLENAME.gz

These MySQL formatted dump files are imported like this:

gunzip -c TABLENAME.gz | mysql -u USERNAME -p -h SERVER DBNAME

People using databases other than MySQL requested that I also export in CSV format. The output of this export command is two files: TABLENAME.txt and TABLENAME.sql. The .txt file is CSV formatted and can be gzipped with a separate command.

mysqldump --opt --complete-insert --skip-add-drop-table -u USERNAME -p -h SERVER -T DIR DBNAME TABLENAME
gzip -c DIR/TABLENAME.txt > DIR/TABLENAME.csv.gz

This CSV dump is imported like this:

gunzip DIR/TABLENAME.csv.gz
mysqlimport --local --fields-optionally-enclosed-by="\"" --fields-terminated-by=, --user=USERNAME -p DBNAME DIR/TABLENAME.csv

The largest HTTP Archive dump file is ~25G unzipped and ~3G gzipped. This highlights a disadvantage of using CSV formatted dumps: there's no way to gunzip and import in memory. The mysqlimport command uses the filename to determine which table to load – if you piped in the rows it wouldn't know the table name. Unzipping a 25G file can be a challenge if disk space is limited.
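One possible workaround, which is not from the original post and is only a sketch: stream the decompressed rows through a named pipe whose name matches the table, so the 25G file never has to exist on disk. This assumes mysqlimport with --local will read from a FIFO, which may not hold on every platform or client version.

# Hypothetical workaround: a FIFO named after the table lets mysqlimport
# infer the table name without materializing the 25G file on disk.
mkfifo DIR/TABLENAME.csv
gunzip -c DIR/TABLENAME.csv.gz > DIR/TABLENAME.csv &
mysqlimport --local --fields-optionally-enclosed-by="\"" --fields-terminated-by=, --user=USERNAME -p DBNAME DIR/TABLENAME.csv
rm DIR/TABLENAME.csv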

On the other hand, the CSV import is ~30% faster than using the MySQL format file. This can save over an hour when importing 30 million rows. The HTTP Archive currently provides dumps in both MySQL and CSV format so people can choose between less disk space or faster imports.

Forward Compatibility

My primary concern is with the flexibility of previously-generated dump files in light of later schema changes – namely adding and dropping columns.

Dump files in MySQL format work fine with added columns. The INSERT commands in the dump are tied to specific column names, so the new columns are simply ignored. CSV formatted dumps are less flexible: the values in a row are loaded into the table's columns in order. If a new column is added at the end, everything works fine. But if a column is added in the middle of the existing columns, every value after that point lands one column to the left of the column it belongs in.

Neither format works well with dropped columns. MySQL formatted files will fail with an "unknown column" error. CSV formatted files will import, but the values will be shifted, this time to the right.
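To make the column-order issue concrete, here is a small sketch against a hypothetical three-column table; it is not the actual HTTP Archive schema.

-- Hypothetical table and a CSV row from a dump taken against it:
--   1,"http://example.com/",53211
CREATE TABLE pages_example (pageid INT, url VARCHAR(255), bytes INT);

-- Adding the new column at the end keeps old CSV dumps importable:
-- the row still loads as pageid=1, url="http://example.com/", bytes=53211.
ALTER TABLE pages_example ADD COLUMN cdn VARCHAR(64);

-- Adding it in the middle would break them: with columns
-- (pageid, cdn, url, bytes) the same row would load as pageid=1,
-- cdn="http://example.com/", url="53211", bytes=NULL.
-- ALTER TABLE pages_example ADD COLUMN cdn VARCHAR(64) AFTER pageid;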

Takeaways

I now feel comfortable making schema changes without invalidating the existing dump files provided I follow these guidelines:

  • don’t drop columns– If a column is no longer needed, I’ll leave it in place and modify the column definition to be a small size.
  • add columns at the end– I prefer to organize my columns semantically, but all new columns from this point forward will be added at the end.
  • I’ll continue to create dumps in MySQL and CSV format. These guidelines ensure that all past and future dump files will work against the latest schema.
