
Endianness in programming languages

Posted by: admin December 28, 2021


Well, the endianness topic has always been a little confusing to me, but I had never faced a problem that required me to even think about the default behaviour of the binary writers/readers I used. I am writing a PNG decoder in C# right now. The PNG file format specification states that all integers are stored in big-endian byte order (which I find very natural). However, I was very surprised to notice that .NET’s BinaryReader/BinaryWriter works in little-endian order. What confused me even more was the fact that Java’s binary I/O works in big-endian order (I am not a Java programmer, so maybe I am wrong). So I started to think about the following questions:

1 – Why are things as they are? I mean the Base Class Library’s default behaviour.
2 – Why is there no way to choose a preferred byte order when using .NET’s System.IO?

I am currently using Jon Skeet’s MiscUtil and it works like a charm (thanks, man =) ). But it would be cool to see this functionality in the Base Class Library.
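For reference, Java’s DataInputStream always reads multi-byte values in big-endian order, which is exactly what the PNG specification calls for. A minimal sketch of reading one 32-bit value from an in-memory stream:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class BigEndianRead {
    public static void main(String[] args) throws IOException {
        // Four bytes as they might appear in a PNG stream: 00 00 01 00
        byte[] data = {0x00, 0x00, 0x01, 0x00};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        // DataInputStream interprets multi-byte values as big-endian,
        // so 00 00 01 00 decodes to 256
        int value = in.readInt();
        System.out.println(value); // prints 256
    }
}
```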


This is because each library was designed to run as well as possible on its most important platform. C#/.NET is from Microsoft and runs mostly on x86, which is little-endian, so it made sense to make the library little-endian. Java is from Sun, and Sun’s SPARC processors were big-endian, so the Java standard became big-endian instead.
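Java makes its big-endian default explicit in java.nio.ByteBuffer, which nevertheless lets you switch byte order per buffer. A small sketch of the difference:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ByteOrderDemo {
    public static void main(String[] args) {
        // A freshly allocated ByteBuffer defaults to big-endian,
        // Java's standard byte order
        ByteBuffer big = ByteBuffer.allocate(4);
        System.out.println(big.order()); // BIG_ENDIAN

        // Big-endian: most significant byte first -> 00 00 00 01
        big.putInt(1);
        System.out.println(big.get(3)); // 1

        // The same value in a little-endian buffer: 01 00 00 00
        ByteBuffer little = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        little.putInt(1);
        System.out.println(little.get(0)); // 1
    }
}
```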


The BCL contains the System.BitConverter static class, which converts between primitive types and byte arrays using the machine’s native byte order. Its BitConverter.IsLittleEndian field tells you what that order is, so you can detect it at runtime and compensate when the wire format differs.

In addition, the System.Net.IPAddress.NetworkToHostOrder and HostToNetworkOrder methods convert between network byte order (big-endian) and the host’s byte order.


I guess it boils down to always being able to deal with both byte orders, regardless of the platform you’re on. Preon tries to hide some of that complexity by letting you declaratively (using annotations) define the mapping between your in-memory data representation and the encoded representation.

So if this is part of your data structure:

public class Image {
    int width;
    int height;
}

then defining the mapping to a natural big endian representation would be as easy as this:

public class Image {
    @BoundNumber int width;
    @BoundNumber int height;
}

However, if the representation is little endian, then you can do this:

public class Image {
    @BoundNumber(byteOrder=LittleEndian) int width;
    @BoundNumber(byteOrder=LittleEndian) int height;
}

In both cases, creating a Codec for this data structure is the same:

Codec<Image> codec = Codecs.create(Image.class);

I know some people were talking about porting this to .NET as well.