I have a large data set in SPSS (v20) with null values for some observations.
I tried saving it as an Excel 2007 file, but when I open the Excel file, "#NULL!" appears in the cells where values are null. Excel runs out of memory when I try to use its Find and Replace function to clean them up.
I tried saving as a CSV file, but then I got a space in the cells where values are null.
Could anybody advise on this, please?
According to the command syntax reference, #NULL! values occur only for system-missing values. To prevent them, assign the system missings a value, which you can do with the RECODE command (e.g. RECODE MyVar (SYSMIS = -9)(ELSE = COPY). would work for a numeric field in which -9 cannot be a valid value).
Depending on what you want the value to be when written to the sheet, you can then use the /CELLS=VALUES subcommand on SAVE TRANSLATE to save the assigned missing numeric code (IMO a bad idea for spreadsheets), or you can assign the missing value a VALUE LABELS and use /CELLS=LABELS to save the string label in the cell.
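Putting those steps together, a sketch of the full syntax might look like the following. The file path, variable name, sentinel code -9, and label text are all placeholders, and it assumes an SPSS version (such as v20) where /TYPE=XLS with /VERSION=12 writes an Excel 2007 workbook:

```spss
* Recode system-missing values to a sentinel that cannot occur in the data.
RECODE MyVar (SYSMIS = -9)(ELSE = COPY).
* Declare -9 as user-missing so analyses still treat it as missing.
MISSING VALUES MyVar (-9).
* Give the sentinel a label, which /CELLS=LABELS will write to the cell.
VALUE LABELS MyVar -9 'Missing'.
EXECUTE.

SAVE TRANSLATE OUTFILE='C:\data\mydata.xlsx'
  /TYPE=XLS /VERSION=12
  /FIELDNAMES
  /CELLS=LABELS
  /REPLACE.
```

With /CELLS=LABELS, cells get the value label where one is defined and the raw value otherwise, so only the sentinel needs a label here.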
Answer:
I typically save as CSV and then, in Excel, save as .xlsx. All missing values are then, as you noted, written as a space, which I accept as representing sysmis values.
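For the CSV route, a minimal sketch of the export step (the file path is a placeholder) could be:

```spss
* Write the active dataset as CSV; system-missing cells come out empty.
SAVE TRANSLATE OUTFILE='C:\data\mydata.csv'
  /TYPE=CSV
  /FIELDNAMES
  /REPLACE.
```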
When I work with a file that has been saved directly to Excel (i.e. with many "#NULL!" values), I use a VBA macro that does the find/replace row by row. The macro is quicker than doing it all at once, which typically slows to infuriating speeds. It is still not as fast as one would want, which is why I go via CSV.
Tags: excel