
Platform independence layer - platform specific data?


Hi,

I started to implement a little platform abstraction layer in my project. At the moment I have a header file (platform.h) which contains all my function declarations / data types for the platform stuff. In the corresponding source files (win32_platform.cpp, linux_platform.cpp, …) I define the functions from platform.h.

Now I don't know how I should store the “platform specific data”. Let's take window creation as an example:

For window creation I have the function PlatformCreateWindow which takes a pointer to a PlatformWindow struct and some other data:

int PlatformCreateWindow(PlatformWindow* wnd, int x, int y, int w, int h, const char* txt);

Because the platform-specific data for window creation depends on the implementation, I have forward declared a struct named “NativeWindow” which gets defined in the platform-specific source file.

// Platform.h
struct NativeWindow;
struct PlatformWindow {
	int x;
	int y;
	int w;
	int h;
	const char* txt;
	NativeWindow* nativeWindow;
};

// win32_platform.cpp
struct NativeWindow {
	HWND windowHandle;
};

In PlatformCreateWindow I can now do something like this:

// Implementation in win32_platform.cpp
int PlatformCreateWindow(PlatformWindow* wnd, int x, int y, int w, int h, const char* txt) {
	wnd->nativeWindow = createNativeWindow(); // Register class, create window ….
	// Init other stuff
	return 0; // or an error code on failure
}
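For illustration, one way to handle ownership of that opaque struct on the Win32 side (sketch only; PlatformDestroyWindow is a hypothetical counterpart, not part of my actual code):

// Sketch only (made-up helper names): the platform file heap-allocates the opaque
// struct inside createNativeWindow(), e.g. roughly:
//     NativeWindow* native = new NativeWindow{};
//     native->windowHandle = CreateWindowExA(/* class, title, style, x, y, w, h, ... */);
//     return native;
// and a matching destroy function releases it again:
void PlatformDestroyWindow(PlatformWindow* wnd) {
	if (wnd && wnd->nativeWindow) {
		DestroyWindow(wnd->nativeWindow->windowHandle);
		delete wnd->nativeWindow; // the platform layer owns the opaque struct
		wnd->nativeWindow = nullptr;
	}
}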

1. Am I on the right track with my approach?

2. Do you have tips / suggestions?

Thanks!


I was in the same trouble when reworking the basic module of our SDK. Looking around on the internet, I found the HAL in Unreal's source code on GitHub and some interesting files called the “minimal Windows API”. The problem I have often faced is that if you include windows.h, a list of hundreds of other headers gets pulled in as well and build complexity increases. Sure, you can define via macros which parts of the header you want, but honestly this feels odd, because you have to dig through the header files and look at the macro definitions they test for. So I include windows.h into my platform layer only once, for the interlocked memory barrier (as this is itself a macro I can't easily reproduce).
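For reference, that macro approach looks roughly like this (a sketch; which of the opt-out macros you actually need depends on the parts of the API you use):

// Sketch of trimming windows.h via its opt-out macros
#define WIN32_LEAN_AND_MEAN   // skips rarely used APIs (Cryptography, DDE, RPC, Shell, Winsock, ...)
#define NOMINMAX              // keeps the min/max macros from clashing with std::min / std::max
#define NOGDI                 // one of the many finer-grained NO* opt-outs
#include <Windows.h>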

My solution to get both a small, flat API and get rid of all the windows.h stuff I don't want to include was inspired by Unreal's minimal Windows API: I declare anything I need from the WINAPI myself as an extern import. So I added a header file with those types in a special namespace (so they don't clash with any global definitions):

#pragma once
#if WINDOWS
#ifndef WindowsDataTypes_h
#define WindowsDataTypes_h

namespace Runtime
{
    typedef unsigned char BYTE;
    typedef int32 BOOL;
    typedef unsigned short WORD;
    typedef unsigned long DWORD;
    …
    #pragma warning ( push )
    #pragma warning( disable : 4201)
    union LARGE_INTEGER 
    {
        struct 
        {
            DWORD LowPart;
            LONG  HighPart;
        };
        LONGLONG QuadPart;
    };
    #pragma warning ( pop )
}
#endif
#endif

Because it is very unlikely that anything in the legacy WINAPI will change in the future (and if it does, I can adapt my code anyway), I copied the definitions of the Windows types I need into my code, like Unreal did.

Then I have, for example, my Chrono.h file:

#pragma once
#if WINDOWS
#ifndef WindowsChrono_h
#define WindowsChrono_h

#define SE_WINAPI __stdcall
#define static_link extern "C"
#include <Windows/DataTypes.h>

namespace Runtime
{
    static_link dll_import BOOL SE_WINAPI QueryPerformanceCounter(LARGE_INTEGER *lpPerformanceCount);
    static_link dll_import BOOL SE_WINAPI QueryPerformanceFrequency(LARGE_INTEGER *lpFrequency);
}

force_inline int64 SE::System::GetHighResolutionTime()
{
    Runtime::LARGE_INTEGER result;
    Runtime::QueryPerformanceCounter(&result);

    return static_cast<int64>(result.QuadPart);
}
force_inline int64 SE::System::GetClockFrequency()
{
    Runtime::LARGE_INTEGER result;
    Runtime::QueryPerformanceFrequency(&result);

    return static_cast<int64>(result.QuadPart);
}

#undef SE_WINAPI

#endif
#endif

And I defined the general API in my System layer like so:

#pragma once
#ifndef SystemChrono_h
#define SystemChrono_h

#include <Numerics.h>

namespace SE
{
    namespace System
    {
        /**
         Gets current CPU tick counter value
        */
        int64 GetHighResolutionTime();
        /**
         Gets the ticks per second constant
        */
        int64 GetClockFrequency();
    }
}

#include <Android/Chrono.h>
#include <Linux/Chrono.h>
#include <Windows/Chrono.h>
#endif
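For comparison, the Linux/Chrono.h included above follows the same pattern. A rough sketch (not my actual file; it assumes a LINUX define analogous to WINDOWS and uses nanosecond ticks):

#pragma once
#if LINUX
#ifndef LinuxChrono_h
#define LinuxChrono_h

#include <time.h> // clock_gettime, CLOCK_MONOTONIC

force_inline int64 SE::System::GetHighResolutionTime()
{
    timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    // Ticks are nanoseconds here, so the frequency below is a fixed constant
    return static_cast<int64>(now.tv_sec) * 1000000000LL + static_cast<int64>(now.tv_nsec);
}
force_inline int64 SE::System::GetClockFrequency()
{
    return 1000000000LL; // ticks per second when a tick is one nanosecond
}

#endif
#endif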

What happens now is the following:

  • The preprocessor includes the Windows-specific code on Windows platforms
  • The compiler matches the public declarations in System with the implementations in the platform-defined files
  • The linker sees the extern "C" dll_import declarations and binds each function either to a definition in my code (which doesn't exist) or to one in the VC runtime / WINAPI

And finally I build my classes on top of the SE::System functions without worrying about how they are implemented. If a function needs different platform-specific data types or parameters, I try to either unify those parameters in the System API, use some kind of switch (via an enum) that my internal implementation can handle, or define a data type in the System API that gets translated into the OS-dependent one.
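From the caller's side that looks roughly like this (a sketch; the include path and the frame-measuring function are made up):

#include <System/Chrono.h> // assumed include path for the System API above

void MeasureFrame()
{
    const int64 frequency = SE::System::GetClockFrequency();   // ticks per second
    const int64 start     = SE::System::GetHighResolutionTime();

    // ... per-frame work ...

    const int64 end = SE::System::GetHighResolutionTime();
    const double frameSeconds = static_cast<double>(end - start) / static_cast<double>(frequency);
    (void)frameSeconds; // e.g. feed this into frame statistics
}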

The latter hasn't happened yet, and my API is fairly complete for everything I need.

The switch processing happens very often; for example, in my file API I have a utility function to translate such a switch into Windows-specific file access modifiers:

inline void TranslateFileFlags(DWORD& openMode, DWORD& shareMode, SE::FileAccessFlags::EnumType flags)
{
    if (flags == SE::FileAccessFlags::Default || HasFlag(flags, SE::FileAccessFlags::Read))
    {
        openMode |= SE_GENERIC_READ;
        if (HasFlag(flags, SE::FileAccessFlags::Shared))
            shareMode |= SE_FILE_SHARE_READ;
    }
    if (HasFlag(flags, SE::FileAccessFlags::Write))
    {
        openMode |= SE_GENERIC_WRITE;
        if (HasFlag(flags, SE::FileAccessFlags::Shared))
            shareMode |= SE_FILE_SHARE_WRITE;
    }
}

I haven't encountered any huge performance issues yet, except when accessing things by name (which shouldn't happen that often in a game), for example when opening a file. This is because Windows offers ANSI and Unicode versions of its WINAPI functions, and I use the Unicode ones to stay UTF-8 compliant. That requires calling MultiByteToWideChar to convert UTF-8 strings before passing them.
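A rough sketch of what that looks like when opening a file (not my actual code; it includes windows.h directly for brevity, and assumes SE_GENERIC_* / SE_FILE_SHARE_* map onto the usual GENERIC_* / FILE_SHARE_* values):

#include <Windows.h>
#include <string>

// Open a file given a UTF-8 path, using the translated flags from above
HANDLE OpenFileUtf8(const char* utf8Path, SE::FileAccessFlags::EnumType flags)
{
    DWORD openMode = 0, shareMode = 0;
    TranslateFileFlags(openMode, shareMode, flags);

    // Convert the UTF-8 path to UTF-16 for the W version of the API
    int wideLength = MultiByteToWideChar(CP_UTF8, 0, utf8Path, -1, nullptr, 0);
    if (wideLength <= 0)
        return INVALID_HANDLE_VALUE;
    std::wstring widePath(static_cast<size_t>(wideLength), L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8Path, -1, &widePath[0], wideLength);

    return CreateFileW(widePath.c_str(), openMode, shareMode, nullptr,
                       OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
}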

[EDIT]

Oh, and I forgot to mention that whenever I have something platform-specific to store in my higher-level code, I pass it around as a void pointer. Most C++ devs around here will be triggered by that, but you never work with those pointers yourself; only the forwarded API calls do, and they know what they expect and how to handle it. You are only responsible for storing that pointer and passing it into the right parameter.
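For example (a sketch with made-up names, not the actual SDK code):

// Higher-level code only sees an opaque handle
namespace SE { namespace System {
    void* CreateNativeWindow(int w, int h, const char* title); // returns a platform handle
    void  ShowNativeWindow(void* handle);                      // hands it back to the platform layer
}}

// Windows implementation: only this translation unit knows the void* is really an HWND
void SE::System::ShowNativeWindow(void* handle)
{
    ShowWindow(static_cast<HWND>(handle), SW_SHOW);
}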

