
How to save memory when reading a file in Php?

Posted by: admin July 12, 2020


I have a 200 kB file that I use on multiple pages, but on each page I need only one or two lines from it. How can I read just the lines I need if I know the line number?

For example, if I need only the 10th line, I don't want to load all the lines into memory, just the 10th.

Sorry for my bad English!

Answers:

Unless you know the offset of the line, you will need to read every line up to that point. You can just throw away the lines you don't want by looping through the file with something like fgets(). (EDIT: Rather than fgets(), I would suggest @Gordon's solution.)
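As a sketch of that sequential approach (the function name read_line and the temp-file demo are mine, not part of the answer):

```php
<?php
// Read only line $lineNo by skipping earlier lines with fgets();
// only one line is ever held in memory at a time.
function read_line(string $path, int $lineNo): ?string
{
    $fh = fopen($path, 'r');
    if ($fh === false) {
        return null;
    }
    for ($i = 1; ($line = fgets($fh)) !== false; $i++) {
        if ($i === $lineNo) {
            fclose($fh);
            return rtrim($line, "\n");
        }
    }
    fclose($fh);
    return null; // fewer than $lineNo lines in the file
}

// Demo: build a small file and pull out its 10th line.
$tmp = tempnam(sys_get_temp_dir(), 'demo');
file_put_contents($tmp, implode("\n", range(1, 20)));
echo read_line($tmp, 10); // prints "10"
```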

Possibly a better solution would be to use a database, as the database engine will do the grunt work of storing the strings and let you fetch a given "line" very efficiently (it wouldn't be a line but a record with a numeric ID; it amounts to the same thing) without having to read the records before it.
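A minimal sketch of that idea using SQLite through PDO (the table and column names are made up here, the pdo_sqlite driver is assumed to be available, and the demo file is generated on the fly):

```php
<?php
// Each line becomes a row keyed by a numeric id, so "line" 10
// can be fetched directly without reading the rows before it.
$path = tempnam(sys_get_temp_dir(), 'demo');
file_put_contents($path, implode("\n", range(1, 100)));

$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE lines (id INTEGER PRIMARY KEY, body TEXT)');

// One-time import: read the file line by line into the table.
$ins = $db->prepare('INSERT INTO lines (id, body) VALUES (?, ?)');
$fh = fopen($path, 'r');
for ($i = 1; ($line = fgets($fh)) !== false; $i++) {
    $ins->execute([$i, rtrim($line, "\n")]);
}
fclose($fh);

// Later, on any page: fetch row 10 directly.
$sel = $db->prepare('SELECT body FROM lines WHERE id = ?');
$sel->execute([10]);
echo $sel->fetchColumn(); // prints "10"
```

In practice you would import once into a file-backed database rather than :memory:, and each page would only run the SELECT.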


Try SplFileObject

echo memory_get_usage(), PHP_EOL;        // 333200

$file = new SplFileObject('bible.txt');  // 996kb
$file->seek(5000);                       // jump to line 5000 (zero-based)
echo $file->current(), PHP_EOL;          // output current line 

echo memory_get_usage(), PHP_EOL;        // 342984 vs 3319864 when using file()

For outputting the current line, you can either use current() or just echo $file. I find it clearer to use the method though. You can also use fgets(), but that would get the next line.

Of course, you only need the middle three lines. I've added the memory_get_usage() calls just to show that this approach uses almost no extra memory.


Do the contents of the file change? If it’s static, or relatively static, you can build a list of offsets where you want to read your data. For instance, if the file changes once a year, but you read it hundreds of times a day, then you can pre-compute the offsets of the lines you want and jump to them directly like this:

 $offsets = array();
 $fh = fopen('file.txt', 'rb');
 for ($i = 1; fgets($fh) !== false; $i++) {
     if ($i == 9)  { $offsets[10] = ftell($fh); } // line 10 starts here
     if ($i == 19) { $offsets[20] = ftell($fh); } // line 20 starts here
 }

and so on. Afterwards, you can trivially jump to that line’s location like this:

 $fh = fopen('file.txt', 'rb');
 fseek($fh, $offsets[20]); // jump to line 20
 echo fgets($fh);          // read it

But this could entirely be overkill. Try benchmarking the operations: compare how long an old-fashioned "read 20 lines" takes versus precompute/jump.
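A rough way to run that comparison (the generated file, its line width, and the target line number are arbitrary choices for the demo):

```php
<?php
// Benchmark sketch: sequential read to a line vs fseek() to a stored offset.
$path = tempnam(sys_get_temp_dir(), 'bench');
file_put_contents($path, str_repeat(str_pad('x', 79) . "\n", 5000));

// Pre-compute the offset of line 5000 once.
$fh = fopen($path, 'rb');
for ($i = 1; $i < 5000; $i++) { fgets($fh); }
$offset = ftell($fh);
fclose($fh);

// Old-fashioned: read every line up to the target.
$t0 = microtime(true);
$fh = fopen($path, 'rb');
for ($i = 1; $i < 5000; $i++) { fgets($fh); }
$sequential = fgets($fh);
fclose($fh);
$tSequential = microtime(true) - $t0;

// Precompute/jump: seek straight to the stored offset.
$t0 = microtime(true);
$fh = fopen($path, 'rb');
fseek($fh, $offset);
$jumped = fgets($fh);
fclose($fh);
$tSeek = microtime(true) - $t0;

printf("sequential: %.6fs  seek: %.6fs\n", $tSequential, $tSeek);
```

Both approaches return the same line; whether the seek version is worth the bookkeeping depends on the measured difference for your file.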


    $lines = array(1, 2, 10);

    $handle = @fopen("/tmp/inputfile.txt", "r");
    if ($handle) {
        $i = 0;
        while (!feof($handle)) {
            $line = stream_get_line($handle, 1000000, "\n");

            if (in_array($i, $lines)) {
                echo $line;
                $line = ''; // don't forget to clean the buffer!
            }

            if ($i > end($lines)) {
                break; // past the last wanted line, stop reading
            }
            $i++;
        }
        fclose($handle);
    }


Just loop through the lines without storing them, e.g.

$i = 1;
$file = fopen('file.txt', 'r');
while (!feof($file)) {
    $line = fgets($file); // read one whole line from the file
    if ($i == 10) {
        break; // stop at the tenth line
    }
    $i++;
}
fclose($file);
The above example keeps only the most recently read line in memory, so this is the most memory-efficient way to do it.


Use fgets() 10 times 🙂 In this case you will not store all 10 lines in memory at once.


Why are you trying to load only those few lines? Do you know that loading all of them is in fact a problem?

If you haven't measured, then you don't know that it's a problem. Don't waste your time optimizing for non-problems. Chances are that any performance gain from not loading the entire 200K file will be imperceptible, unless you know for a fact that loading that file is indeed a bottleneck.