
How is it possible to test generated php files?

Posted by: admin July 12, 2020


I am working on a refactoring tool that produces PHP files. Those generated files can contain variables, functions, and classes.

I would like to write unit tests to check that those files work as expected, but I have no idea how I should do it.

If I require an incorrect file with if (! @require('my_new_file.php') ) {, I still get a parse error, which I am not able to catch.

I call require inside a function, but the definitions in the required files are still available outside of my function. How could I avoid that? Is it possible to require a file in a scope, so it will not pollute the global namespace?

Even if I call eval(file_get_contents('my_new_file.php')) inside a function, the functions defined in my_new_file.php are available globally.
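The leak can be reproduced with a minimal sketch (the generated file's content here is made up for illustration):

```php
<?php
// Write a stand-in for a generated file; its content is illustrative only.
file_put_contents('my_new_file.php', '<?php $tmp = 1; function generated_fn() { return 42; }');

function load_generated(): void
{
    require 'my_new_file.php';
    // $tmp is local to load_generated(), but generated_fn() is not:
    // functions and classes are always registered globally (or in their namespace).
}

load_generated();
var_dump(function_exists('generated_fn')); // bool(true): the definition escaped the function scope
```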

Answers:

Generate PHPUnit tests for the generated files; e.g. with PhpUnitGen:

composer require --dev paulthebaud/phpunit-generator

When the generated files are CLI scripts, always have them exit with a meaningful status code: exit(0) on success and exit(1) (or another non-zero code) on failure, so a test harness can detect failures.
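A parse error in a generated file can also be detected before requiring it, by linting it in a child process and checking the exit code. A minimal sketch (the helper name lint_file() is made up for illustration):

```php
<?php
// Lint a file with `php -l` in a child process; a parse error then becomes a
// non-zero exit code we can check, instead of an uncatchable fatal error.
function lint_file(string $path): bool
{
    exec(escapeshellarg(PHP_BINARY) . ' -l ' . escapeshellarg($path) . ' 2>&1', $output, $status);
    return $status === 0; // php -l exits with 0 only when the file parses cleanly
}

file_put_contents('/tmp/ok.php', '<?php $x = 1;');
file_put_contents('/tmp/broken.php', '<?php $x = ;'); // deliberate syntax error

var_dump(lint_file('/tmp/ok.php'));     // bool(true)
var_dump(lint_file('/tmp/broken.php')); // bool(false)
```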


Functions and classes are always defined in the surrounding namespace, nothing will change that because it’s a language feature. You could inject a namespace into the generated files in order to keep them from polluting other namespaces, which would make name collisions less of a problem.
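A sketch of such injection, assuming the generated files start with a plain <?php tag and declare no namespace of their own (the function name inject_namespace() is illustrative):

```php
<?php
// Prepend a unique namespace declaration to generated code so its functions
// and classes cannot collide with those of the test suite.
function inject_namespace(string $code, string $ns): string
{
    return preg_replace('/^<\?php\b/', "<?php\nnamespace {$ns};", $code, 1);
}

$generated = "<?php function helper() { return 'hi'; }";
echo inject_namespace($generated, 'GeneratedTest1');
```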

Other than that, you can isolate the tested code from the testing code via processes. One way is pcntl_fork(), if available, or exec() as a mediocre replacement. Another way might be provided by your test framework: PHPUnit, for instance, can run a test in a separate process via the @runInSeparateProcess annotation, so that the global namespace is either cleaned up or protected.
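A sketch of the PHPUnit approach, assuming PHPUnit is installed via Composer; the expected function generated_fn() is made up for illustration:

```php
<?php
use PHPUnit\Framework\TestCase;

class GeneratedFileTest extends TestCase
{
    /**
     * Run this test in its own PHP process, so requiring the generated file
     * cannot pollute (or crash) the process running the rest of the suite.
     *
     * @runInSeparateProcess
     * @preserveGlobalState disabled
     */
    public function testGeneratedFileLoads(): void
    {
        require 'my_new_file.php';
        $this->assertTrue(function_exists('generated_fn')); // illustrative expectation
    }
}
```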


If I understand what you are trying to do, I would use:

$output = exec("php script_to_test.php");

And then check whether the output is the desired one. exec() runs your PHP script in a separate process and returns the last line of its output as a string; pass an array as the second argument to capture every line. From there you should be able to draw the conclusions you need for your tests. You can also parse the output and search for notices, warnings, fatal errors, etc.
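A sketch of that check, with an illustrative stand-in for the generated script:

```php
<?php
// Create a stand-in generated script, run it in a separate process with exec(),
// and assert on the combined output and exit code.
file_put_contents('/tmp/script_to_test.php', '<?php echo "expected output";');

exec(escapeshellarg(PHP_BINARY) . ' /tmp/script_to_test.php 2>&1', $lines, $status);
$output = implode("\n", $lines);

var_dump($status === 0);                             // bool(true): script exited cleanly
var_dump($output === 'expected output');             // bool(true)
var_dump(stripos($output, 'fatal error') === false); // bool(true): nothing crashed
```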


As others have suggested, running the file in a separate PHP process is close to what you want. But you should use proc_open() instead, so you can inspect both stdout and stderr (the other answers here only let you inspect stdout, which is not very helpful in detecting errors). If an error occurs, it will most likely be printed to stderr, because the PHP CLI prints errors to stderr by default, not stdout. At the end of this post is a custom version of shell_exec() that lets you monitor stdout and stderr individually, and lets you write data to stdin if your script needs it. With it you can test individual scripts like this:

$cmd = implode(" ", array(
    escapeshellarg(PHP_BINARY),
    escapeshellarg("script_to_test.php"),
    // if your script needs extra arguments, add them here
));
$stdin = ""; // if your script needs stdin data, add it here
$exit_code = my_shell_exec($cmd, $stdin, $stdout, $stderr);

After that, anything your script wrote to stderr is in the $stderr variable, and anything it printed to stdout is in the $stdout variable; check whether they contain what you expected. If they don't, your script probably failed somehow, and the contents of $stderr/$stdout should tell you how it failed.

function my_shell_exec(string $cmd, ?string $stdin = null, ?string &$stdout = null, ?string &$stderr = null): int {
    // use tmpfiles in case stdout/stderr are so large that the pipes would get full before we read them, which would result in a deadlock.
    $stdout_handle = tmpfile();
    $stderr_handle = tmpfile();
    $descriptorspec = array(
        0 => array("pipe", "rb"),  // stdin is *inherited* by default, so even if $stdin is empty, we should create a stdin pipe just so we can close it.
        1 => $stdout_handle,
        2 => $stderr_handle,
    );
    $proc = proc_open($cmd, $descriptorspec, $pipes);
    if ($proc === false) {
        throw new \RuntimeException("proc_open failed!");
    }
    if (!is_null($stdin) && strlen($stdin) > 0) {
        fwrite($pipes[0], $stdin);
    }
    fclose($pipes[0]);
    $ret = proc_close($proc); // waits for the process to terminate and returns its exit code
    rewind($stdout_handle); // stream_get_contents can seek but it has let me down earlier, https://bugs.php.net/bug.php?id=76268
    $stdout = stream_get_contents($stdout_handle);
    rewind($stderr_handle);
    $stderr = stream_get_contents($stderr_handle);
    return $ret;
}