
When did game developers start using 3D libraries to develop 2D games?

37 comments, last by JoeJ 4 months, 1 week ago

@undefined For what it's worth, I can't help but point out that Killer Instinct and some other games of that era, like Donkey Kong Country, used pre-rendered 3D graphics as bitmaps and tilemaps.

Here is the graphics subsystem of the SNES:

  • PPU 1: renders graphics (tiles) and applies transformations to them (rotation and scaling).
  • PPU 2: provides effects such as window, mosaic, and fades over the rendered graphics.

Quoting:

https://www.copetti.org/writings/consoles/super-nintendo/
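The rotation and scaling the PPU applies boils down to an affine mapping from screen pixels into tilemap space, as in the SNES's Mode 7. A minimal sketch of that mapping, with simplified fixed-point conventions that are my own illustrative assumptions rather than the PPU's exact register layout:

```c
/* Affine screen-to-texel mapping behind SNES-style rotation/scaling.
 * For each screen pixel (sx, sy), a 2x2 fixed-point matrix [a b; c d]
 * plus an origin maps into the tilemap's texel space. Real PPU details
 * (register names, wrap modes) are simplified here. */
typedef struct {
    int a, b, c, d;   /* 8.8 fixed-point matrix entries */
    int ox, oy;       /* texel-space origin, 8.8 fixed point */
} Affine;

/* Map a screen pixel to integer texel coordinates. */
static void affine_map(const Affine *m, int sx, int sy, int *tx, int *ty)
{
    *tx = (m->a * sx + m->b * sy + m->ox) >> 8;
    *ty = (m->c * sx + m->d * sy + m->oy) >> 8;
}
```

With the identity matrix (a = d = 256 in 8.8 fixed point) the map is a plain copy; scaling the matrix entries zooms, and mixing b/c rotates the whole layer.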

And Wikipedia has a good article with a "Development" section mentioning that both the arcade version and the SNES port used graphics pre-rendered on Silicon Graphics workstations. The arcade version had a hard drive, which increased storage for bitmaps.

The hard drive was a Seagate ST9150AG (131 MB). Reference: eBay.


rei4 said:
Then I wondered what the reasons were for the switch from bit block transfer (bit blitting) to using textures and 2D polygons.

Even Windows made that transition in the same time frame. Windows XP (2001) still relied on 2D-accelerated graphics cards to do that work: graphics drivers could implement the details however they wanted, but most cards still had 2D interfaces. Windows Vista (2007) took a hybrid approach: display drivers for old hardware could use 2D acceleration, but there was a parallel path through the user-mode D3D driver, and GDI calls like BitBlt() were routed either way. Windows 7 (2009) dumped the session-space display driver entirely. All the 2D APIs you use in Windows for drawing go through the same D3D graphics drivers the games use.
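For readers unfamiliar with the term, a "bit block transfer" is at its core just a rectangular pixel copy between buffers. A minimal software sketch of that core operation (GDI's real BitBlt() adds raster ops, clipping, and format conversion on top; function and parameter names here are my own):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy a w x h rectangle of 32-bit pixels from (sx, sy) in src to
 * (dx, dy) in dst. Pitch is the buffer width in pixels. This is the
 * repetitive per-row copy that 2D accelerators took off the CPU. */
static void blit(uint32_t *dst, size_t dst_pitch, int dx, int dy,
                 const uint32_t *src, size_t src_pitch, int sx, int sy,
                 int w, int h)
{
    for (int row = 0; row < h; row++) {
        memcpy(dst + (size_t)(dy + row) * dst_pitch + dx,
               src + (size_t)(sy + row) * src_pitch + sx,
               (size_t)w * sizeof(uint32_t));
    }
}
```

Dedicated 2D hardware did exactly this kind of loop on its own, freeing the CPU while the rectangle was moved.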

The hardware transition happened across those same years, as machines shifted from 2D-accelerated graphics to 3D-accelerated graphics even at the cheapest low end of office machines. By about 2010 you couldn't find a machine without it: even the cheapest boxes had integrated 3D graphics with full DirectX support. It wasn't the fastest, but it was universally present.

@undefined Thanks. Discussing the Windows graphical interface helps put things into perspective. For example, I was very surprised to learn that Windows XP was using the same tired windowing system as its predecessors. I had my hunch about a circa-2010 finality against 2D. DirectX 9.0 (a, b, c) supported direct 2D, but by DirectX 10 (circa 2008) direct 2D support was eliminated. Considering what you said about the hardware on the market by 2010, it's amazing just how sweeping the transition to 3D was. Not just Microsoft was involved, but the whole industry.

Source:

The Design of a 2D Graphics Accelerator for Embedded Systems

https://www.mdpi.com/2079-9292/10/4/469

Just wanted to share that 2D accelerators are still being used in embedded systems. The resources on some systems are so constrained that 3D acceleration can't be justified, so they use 2D acceleration instead. I have learned that 'acceleration' is a broad term that applies to many areas, such as cryptography, AI, and signal processing. It boils down to cases where a general-purpose CPU cannot process data and instructions quickly enough to suit a need. For 2D graphics, acceleration is accomplished by adding dedicated communication paths to a video buffer. I don't know all the details, but vertices, lines, filling, rotation, and scaling are achieved by optimizing the architecture for repetitive and complex instruction streams, adding special registers, and adding cache. The embedded system mentioned in the article above uses the acceleration for things like HMIs (human-machine interfaces), and also GUIs for user applications.
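The offloading idea can be pictured as the CPU handing a small command packet to a drawing engine that then walks the framebuffer itself. A hypothetical sketch of such a command interface in software (register layouts and packet formats vary per chip; nothing here models any real device):

```c
#include <stdint.h>

/* Illustrative 2D-accelerator command: the CPU fills in a packet
 * describing the operation and a destination rectangle; the engine
 * performs the repetitive per-pixel work. */
typedef struct {
    enum { CMD_FILL } op;   /* only a rectangle fill in this sketch */
    int x, y, w, h;         /* destination rectangle, in pixels */
    uint32_t color;
} Cmd2D;

/* Execute one command against a framebuffer of the given pitch. */
static void engine_execute(uint32_t *fb, int pitch, const Cmd2D *cmd)
{
    if (cmd->op == CMD_FILL) {
        for (int row = 0; row < cmd->h; row++)
            for (int col = 0; col < cmd->w; col++)
                fb[(cmd->y + row) * pitch + (cmd->x + col)] = cmd->color;
    }
}
```

On real hardware the inner loops run in silicon with direct access to video memory, which is exactly where the speedup over a general-purpose CPU comes from.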

At some point I plan to work with the XGS (X Game Station). Powered by an 8-bit Atmel AVR MCU (644P), it appears that in this case the processor itself is so fast that it does not depend on a separate acceleration system. I bought my XGS from ic0nstrux.com. I think there is an xgamestation website as well.

rei4 said:
I had my hunch about circa 2010 finality against 2D. Direct X 9.0(a,b,c) supported direct 2D but by Direct X 10 (circa 2008) direct 2d support was eliminated.

Was it? Maybe you mean DirectDraw specifically? Direct2D (https://en.wikipedia.org/wiki/Direct2D) is a newer 2D API that, as far as I know and could find, is still supported to this day. Though it seems its uses are more for user interfaces, vector graphics, etc.

Since I'm on it, I want to highlight one important point that hasn't gotten enough focus in this thread, IMHO: just because you are using DirectX (3D) doesn't mean you have to use the full 3D-first, 2D-locked-view nonsense that Unity or Unreal do. (Yes, I'm calling it nonsense. I'm sure there are reasons it exists, and there are certainly use cases where mixed modes make sense, but for pure old-school 2D games such a system is pretty abysmal, IMHO.)

So what I'm saying is, you can still craft an interface that mimics the semantics of pure 2D while taking advantage of the modern libraries. That's what I've personally done with my engine, as I'm currently only working on a 2D SNES-style title. I'm pretty sure that modern DirectX easily beats old BitBlt or whatever. I implemented a 2D tilemap using a geometry shader and LUT textures, which is pretty fast. With this you can have animated tilemaps with absurd sizes and layer counts, all on screen (practically useful, e.g., in editor mode), which I'm pretty sure bit-blitting would have problems rendering at an interactive frame rate.
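The LUT part of such a tilemap scheme is essentially an index-to-atlas-rectangle lookup: the map stores small tile indices, and each index is converted into a source rectangle in a tile atlas when the quad is generated. A CPU-side sketch of that lookup (names and atlas layout are illustrative assumptions, not the poster's actual shader code):

```c
/* Source rectangle in a tile atlas, in pixels. */
typedef struct { int u0, v0, u1, v1; } TileRect;

/* Convert a tile index into its atlas rectangle, given an atlas with
 * atlas_cols tiles per row, each tile_px pixels square. On the GPU the
 * same arithmetic runs per generated quad, with the map itself sampled
 * from a lookup texture. */
static TileRect tile_atlas_rect(int tile_index, int atlas_cols, int tile_px)
{
    TileRect r;
    r.u0 = (tile_index % atlas_cols) * tile_px;
    r.v0 = (tile_index / atlas_cols) * tile_px;
    r.u1 = r.u0 + tile_px;
    r.v1 = r.v0 + tile_px;
    return r;
}
```

Animating a tile then only requires changing its index in the map; no pixel data moves, which is a large part of why this approach scales to huge layer counts.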

@undefined Yes, I stand corrected about Direct2D. I thought Direct2D and DirectDraw were the same thing.

So, sounds like modern 3D APIs and game engines lock you into a 3D perspective? It makes sense whenever you don't need a light source and camera for a 2D game (aka blender default scene).

Is this your own engine that draws 2D using 3D hardware, e.g. the shader you mentioned?

rei4 said:
So, sounds like modern 3D APIs and game engines lock you into a 3D perspective? It makes sense whenever you don't need a light source and camera for a 2D game (aka blender default scene).

Modern, widely used game engines usually only have a dedicated 3D view, and allow you to lock the view into a 2D mode (as frob already mentioned). This usually means the view controls are ass (at least Unity is really fucking bad with its 2D mode), and you still have to deal with the whole API being 3D (positions, rotations, and scale all having three coordinates, etc.).

rei4 said:
Is this your own engine that draws 2D using 3D hardware, e.g. the shader you mentioned?

Yep. I've been developing my own engine for 12 years now. I started out primarily supporting 3D, using Direct3D, but then ported a game I'd been making in RPG Maker XP to it. Since then the engine has been primarily 2D, and as mentioned it has a full 2D view/API. It just uses DirectX 11 to achieve the 2D rendering. That's kind of the thing with the 3D APIs, though: you can customize them to whatever you need. Normal sprite rendering also uses a geometry shader to extend a point into a quad, using manual calculations on 2D input positions instead of 3D positions and matrices. But you can still use the full power of shaders. I have to admit I never used the old way before, but I really don't see how the old/ancient 2D rendering would benefit me over what I'm doing here with the 3D API.
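The "2D positions, no matrices" approach amounts to mapping pixel coordinates straight into clip space before (or while) the point is expanded into a quad. A sketch of that arithmetic, with conventions (top-left pixel origin, D3D-style clip space with y up) that are my assumptions rather than the engine's actual code:

```c
/* Map a pixel position to clip space: x in [-1, 1] left to right,
 * y in [-1, 1] bottom to top (so pixel y is flipped). This replaces
 * a full projection-matrix multiply for pure 2D rendering. */
static void pixel_to_clip(float px, float py, float screen_w, float screen_h,
                          float *cx, float *cy)
{
    *cx =  (px / screen_w) * 2.0f - 1.0f;
    *cy = -((py / screen_h) * 2.0f - 1.0f);
}
```

A geometry shader doing point-to-quad expansion would run this once per corner, offsetting by the sprite's width and height in the same units.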

@undefined Is there any other engine, like yours, capable of bypassing the constraints you mentioned? The ones inherent in Unity's "2D mode", where you're handed all the 3D parameters every time you call a function?

Do you just use DirectX 11 for the API? Any other dependencies?


Juliean said:
Have to admit, I never used it before, but I really don't see how the old/ancient 2d rendering would benefit me over what I'm doing here with the 3D API.

I understand that options are limited for those trying to use the now-obsolete methodology for 2D games. But I am interested in the older hardware. Eventually I might try homebrew on the Sega Genesis, and for that you do need to study the ways of the 8/16-bit game hardware. Also, the XGS is about the only way I know of to learn about games for single-board computers. I'm seriously considering SBCs simple enough that I could route my own PCB, then stencil, paste, and bake on the components.

I'm really interested in MiSTer, the FPGA-based emulator that allows one to edit a Sega Genesis core, provided they know how to do that; in other words, that they understand enough about hardware description languages (and VLSI, for that matter).

This topic is closed to new replies.
