
Importing a large SQL file into MySQL via the command line

Posted by: admin November 1, 2017


I’m trying to import an SQL file of around 300 MB into MySQL via the command line in Ubuntu. I used

source /var/www/myfile.sql;

Right now it’s displaying what looks like an endless stream of:

Query OK, 1 row affected (0.03 sec)

However, it’s been running for a while now. I’ve never imported a file this large before, so I just want to know whether this is normal. If the process stalls or hits an error, will that show up on the command line, or will the process run indefinitely?



You can import a .sql file via standard input like this:

mysql -u <user> -p<password> <dbname> < file.sql

Note: there shouldn’t be a space between -p and <password>

Reference: http://dev.mysql.com/doc/refman/5.0/en/mysql-batch-commands.html

Note on suggested edits: this answer was changed by suggested edits to use the inline password parameter. I can recommend that for scripts, but be aware that when you write the password directly in the parameter (-p<password>) it may be captured by your shell history, revealing your password to anyone who can read the history file. Plain -p instead prompts you to enter the password on standard input.
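If you want a scriptable import that keeps the password out of both the command line and the shell history, one option is a client option file. A sketch (file name, user, and database are placeholders to adjust):

```shell
# Sketch: store credentials in a client option file instead of passing
# them on the command line. Names below are placeholders.
cat > my_import.cnf <<'EOF'
[client]
user=myuser
password=mypassword
EOF
chmod 600 my_import.cnf   # restrict who can read the credentials

# --defaults-extra-file must come before any other option.
mysql --defaults-extra-file=my_import.cnf mydb < file.sql
```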


Regarding the time taken to import huge files: the biggest factor is that MySQL’s default setting is autocommit = true. Turn that off before importing your file and the import runs far faster.

First open MySQL:

mysql -u root -p

Then you just need to do the following:

mysql> USE your_db;

mysql> SET autocommit=0; SOURCE the_sql_file.sql; COMMIT;
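The same can be done non-interactively with a heredoc. A sketch (database and file names are placeholders), also disabling unique and foreign-key checks, which is a common companion speed-up for bulk loads:

```shell
# Sketch: non-interactive bulk import with autocommit and
# per-row checks disabled for the duration of the load.
mysql -u root -p your_db <<'SQL'
SET autocommit = 0;
SET unique_checks = 0;        -- skip per-row unique checks during the load
SET foreign_key_checks = 0;   -- skip FK validation during the load
SOURCE the_sql_file.sql;
COMMIT;
SET unique_checks = 1;
SET foreign_key_checks = 1;
SQL
```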


+1 to @MartinNuc: you can run the mysql client in batch mode and then you won’t see the long stream of “OK” lines.

The amount of time it takes to import a given SQL file depends on a lot of things: not only the size of the file, but the type of statements in it, how powerful your server is, and how many other things are running at the same time.

@MartinNuc says he can load 4 GB of SQL in 4-5 minutes, but I have had 0.5 GB SQL files take 45 minutes on a smaller server.

We can’t really guess how long it will take to run your SQL script on your server.

Re your comment:

@MartinNuc is correct that you can choose to make the mysql client print every statement. Or you could open a second session and run mysql> SHOW PROCESSLIST; to see what’s running. But you’re probably more interested in a “percentage done” figure, or an estimate of how long it will take to complete the remaining statements.

Sorry, there is no such feature. The mysql client doesn’t know how long it will take to run later statements, or even how many there are. So it can’t give a meaningful estimate for how much time it will take to complete.
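A common workaround, though, is to pipe the file through the pv utility (assuming it is installed; file and database names below are placeholders), which reports bytes read, a progress bar, and an ETA based on the file size rather than the statement count:

```shell
# Sketch: pv shows progress for the bytes it has fed to mysql.
pv myfile.sql | mysql -u root -p mydb
```

The percentage is byte-based, so it is only approximate when some statements take much longer than others.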


The solution I use for large SQL restores is the mysqldumpsplitter script. I split my sql.gz into individual tables, then load up something like MySQL Workbench and process it as a restore to the desired schema.

Here is the script

And this works for larger SQL restores; my average on one site I work with is a 2.5 GB sql.gz file, 20 GB uncompressed, and ~100 GB once restored fully.
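The core idea of splitting can be sketched in a few lines of awk: mysqldump writes a `-- Table structure for table` comment before each table, so lines can be routed to per-table files on those markers. The sample dump below is only for demonstration; the real mysqldumpsplitter script handles views, routines, and compressed input that this sketch does not:

```shell
# Sketch: split a mysqldump file into one file per table, switching
# output files at each "-- Table structure for table" header comment.

# Tiny two-table dump for demonstration (a real dump.sql goes here):
cat > dump.sql <<'EOF'
-- Table structure for table `users`
CREATE TABLE `users` (id INT PRIMARY KEY);
-- Table structure for table `posts`
CREATE TABLE `posts` (id INT PRIMARY KEY);
EOF

# Route every line to table_<name>.sql; lines before the first
# header (dump preamble) are skipped because out is still unset.
awk '/-- Table structure for table/ {
         split($0, p, "`")            # table name sits between backticks
         out = "table_" p[2] ".sql"
     }
     out { print > out }' dump.sql

ls table_*.sql   # one file per table, e.g. table_posts.sql table_users.sql
```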