"May I ask what you use instead of OOP and what kind of software do you write? Just curious."

There really isn't anything used instead; it's literally just not using OOP. I write a lot of performance-critical tooling, from bulk data processing such as imagery/telemetry to visualising large amounts of data in graphics simulations (not game development, but the same skill set; graphics are graphics).
So when it comes to performance, cache is everything, and OOP philosophy works directly against it. It is a fact, not merely a matter of disliking OOP, that you cannot pack data to hit those cache lines and lay out memory optimally if you follow OOP philosophy. OOP is simply not an option when you need to squeeze the performance/hardware to the limit, and that applies to everything that needs it, including games.
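To make the cache-line point concrete, here is a minimal sketch (the structures and names are made up for illustration, not from any particular codebase): the object-style layout drags cold fields through the cache on every iteration, while the packed layout keeps only the hot data on each 64-byte line.

```c
#include <stddef.h>

/* Object-style layout: hot position data interleaved with cold fields.
   Iterating over positions wastes most of each 64-byte cache line. */
struct GameObjectAoS {
    float x, y, z;        /* hot: touched every frame */
    char  name[48];       /* cold: rarely touched     */
    int   flags;          /* cold                     */
};

/* Data-oriented layout: hot fields packed into their own arrays, so a
   64-byte cache line holds 16 consecutive floats of useful data. */
struct PositionsSoA {
    float *x, *y, *z;     /* three parallel arrays of length n */
    size_t n;
};

void move_all(struct PositionsSoA *p, float dx)
{
    for (size_t i = 0; i < p->n; i++)
        p->x[i] += dx;    /* streams straight through the cache */
}
```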
I and others working this way do not use OOP as people usually think about or practise it; structures are not designed the same way. I do not believe in or use OOP concepts such as encapsulation, polymorphism, wide use of generics and template metaprogramming, or any of the other things the C++ committee obsesses over. The C++ I work on has absolutely zero template usage, for example, which is not at all the state of general C++ projects; the egregious use of templates is a big part of why C++ compile times are so long for so many projects.
OOP makes you model code after the real world or the tangible things you think about, but the nature of the data you are actually working with often has no real-world analogue. That is why so many people using OOP descend into a bog of inheritance that just gets worse and worse, and then have to come up with solutions like the factory pattern to fix problems the programmers created themselves; so many "programming patterns" are solutions to problems caused by OOP. And once data is modelled this way, it will never fit neatly into a cache line; it is simply too unwieldy.
OOP complicates code unnecessarily and eats time, especially for people and teams that descend into the madness of UML diagrams and all that depressing nonsense. You spend more time obsessing over the code itself, how to encapsulate data, for example, than over the problems the code actually needs to solve. The same goes for structure design: people write methods so generic they can handle every conceivable situation, when you could just design multiple methods, each with a clear purpose. Once everything is generalised, understanding someone else's code, or your own old code, becomes a nightmare; you cannot tell at a glance what code buried deep in OOP is actually trying to do without untangling all the inheritance, generics, and so on, and projects grow until there are parts of the codebase nobody can understand.
There is a term for this: "data-oriented design". Mike Acton (formerly of Insomniac Games) gave a talk about it at CppCon. It covers NOT writing code or thinking about problems the OOP way, and as he says in the talk, these are not new ideas; they have been around for decades, but OOP became so popular that they were lost. The insane levels of OOP philosophy ingrained into programmers, usually starting with CS at university, have created a state of things where a lot of programmers don't know any other way of doing things.
His talk focuses on performance, i.e. how to take advantage of the CPU cache by packing data, and demonstrates how OOP causes cache misses (IIRC; it's been years since I watched it and I've only skimmed it again now). Even if you do not need that performance, it also improves code maintainability by getting you out of the depths of hell caused by the vast majority of OOP philosophy: think about the data, not the code as a platform. It doesn't matter if you end up with three methods instead of one generic one that can do a whole bunch of things; the three methods make the code clearer AND let you design them to pack data better and actually hit those cache lines.
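As a rough illustration of the "three clear methods instead of one generic one" idea (hypothetical types and function names, not taken from Acton's talk): each kind of data gets its own packed array and its own purpose-built function, instead of one virtual update dispatched per object.

```c
#include <stddef.h>

/* Each kind of entity lives in its own contiguous array and is
   processed by its own tight loop. The loops walk sequential memory,
   so each cache line fetched is fully used, and each function has one
   obvious job. */
typedef struct { float x, y, vx, vy; } Particle;
typedef struct { float x, y, hp;     } Enemy;

void update_particles(Particle *p, size_t n, float dt)
{
    for (size_t i = 0; i < n; i++) {   /* sequential, prefetch-friendly */
        p[i].x += p[i].vx * dt;
        p[i].y += p[i].vy * dt;
    }
}

void update_enemies(Enemy *e, size_t n, float regen)
{
    for (size_t i = 0; i < n; i++)
        e[i].hp += regen;              /* one clear purpose, no dispatch */
}
```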
Yep, I need control of the memory. I can't depend on a GC to clean things up whenever it decides to, after they're out of scope or unreferenced, using a strategy I can't control. A lot of projects that start having problems then develop solutions to appease the GC, which to me is ridiculous: you are creating a solution to a problem you shouldn't be having. GC languages are fine, but they have a limit. If you need the ability to lay out memory as you see fit, a GC language is out of the question, and needing control over memory usually coincides with needing to push the hardware to its limit, whether that is for games, servers doing heavy data processing, etc.

99% of the time when I look at other people's code, the slowdowns inherent in what they write come from inefficient memory usage. There's a topic I made a long time ago about cache efficiency that might be worth looking at: https://www.resetera.com/threads/le...ding-how-slow-computer-processes-can-be.4927/
(note -- someone said they could write an entire game in Lua, please look at the above)
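As a sketch of the kind of control a GC takes out of your hands (a hypothetical arena allocator, names invented for illustration): all allocations come out of one contiguous block, laid out in the order you choose, and everything is released exactly when you decide.

```c
#include <stdlib.h>

/* Minimal bump/arena allocator: one malloc up front, allocations are
   pointer bumps into a contiguous block (so related data sits together
   in memory), and the whole arena is reset or freed in one call. */
typedef struct {
    char  *base;
    size_t used, cap;
} Arena;

int arena_init(Arena *a, size_t cap)
{
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t size)
{
    size = (size + 15) & ~(size_t)15;        /* round up: 16-byte align */
    if (a->used + size > a->cap) return NULL;
    void *p = a->base + a->used;
    a->used += size;
    return p;
}

void arena_reset(Arena *a) { a->used = 0; }  /* "free" everything at once */
void arena_free(Arena *a)  { free(a->base); a->base = NULL; }
```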
Languages with garbage collection and hands-off memory management employ a number of schemes that the programmer doesn't have to worry about, but there is no universal best memory management scheme. Your memory management should be built specifically for the job at hand; it should be fitted to the situation to get the most performance out of the system you are creating. In C, for example, I can explicitly declare the cache alignment of a structure, and even declare the alignment of a contiguous block of memory to populate with instances of that structure. I can manually pack my data by simply reordering the declaration of variables inside the structure. I can, with fine granularity (with respect to the operating system, of course), define exactly where my objects go in memory. All of this is supposed to be invisible in a managed language like Java, so while you can do some of it there, the language itself fights you along the way. C is made explicitly to expose this to the programmer.
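For instance, a minimal C11 sketch of all three of those techniques (the struct names are made up): explicit cache-line alignment via alignas, a contiguous aligned block via aligned_alloc, and member reordering to eliminate padding.

```c
#include <stdalign.h>
#include <stdio.h>
#include <stdlib.h>

/* Badly ordered: the compiler inserts padding around the chars. */
struct Unpacked {
    char   active;   /* 1 byte + 7 bytes padding */
    double value;    /* 8 bytes                  */
    char   id;       /* 1 byte + 7 bytes padding */
};                   /* typically 24 bytes       */

/* Reordered largest-first; alignas(64) on the first member forces
   every instance onto the start of its own 64-byte cache line. */
struct Packed {
    alignas(64) double value;
    char active;
    char id;
};                   /* sizeof == 64 here        */

int main(void)
{
    printf("Unpacked: %zu bytes, Packed: %zu bytes\n",
           sizeof(struct Unpacked), sizeof(struct Packed));

    /* Contiguous 64-byte-aligned block of instances (C11 aligned_alloc;
       the total size must be a multiple of the alignment). */
    size_t n = 1024;
    struct Packed *items = aligned_alloc(64, n * sizeof *items);
    if (!items) return 1;
    free(items);
    return 0;
}
```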