
hobblygobbly

Member
Oct 25, 2017
7,580
NORDFRIESLAND, DEUTSCHLAND
May I ask what you use instead of OOP and what kind of software do you write?

Just curious.
There really isn't anything used instead (it's literally just not using OOP), and I write a lot of performance-critical tooling, from bulk data processing such as imagery/telemetry to visualising large amounts of data in graphics simulations (not game development, but the same skill set; graphics are graphics).

When it comes to performance, cache is everything, and OOP philosophy works directly against that: you cannot pack data to hit cache lines and lay out memory well if you follow OOP philosophy. That is not just a matter of disliking OOP. OOP is simply not an option when you need to squeeze the hardware to its limit, and that applies to everything that needs it, including games.

Myself and others do not use OOP the way most people think about or use it; structures are not designed the same way. I do not believe in or use OOP concepts such as encapsulation, polymorphism, wide use of generics and template metaprogramming, or all the other things the C++ committee dreams up. The C++ I work on has absolutely zero template usage, for example, which is not at all the norm for general C++ projects; that egregious template use is a big part of why C++ compile times are so long for many projects.

OOP makes you model code after the world, or the tangible things you think about, but the nature of the data you are working with is often not suited to that; a lot of the time your data has no real-world analogue. That is why so many people using OOP descend into a bog of inheritance that just gets worse and worse, and then have to come up with solutions like the factory pattern to fix problems the programmers created themselves; so many "programming patterns" are solutions to problems caused by OOP. In the end, data modelled this way will never fit a cache line; it's just too unwieldy.

OOP complicates code unnecessarily and eats up time, especially for people/teams that descend into the madness of things like UML diagrams and all that depressing nonsense: one spends more time obsessing about the code and how to encapsulate data, for example, when that doesn't solve any of the problems the code actually needs to solve. Same thing with structure design: people write OOP methods so generic that they can handle many situations, when you could just design multiple methods, each with a clear purpose for one situation. When things are generalised, trying to understand someone else's code, or old code, becomes a nightmare; you look at code so deep in OOP that you can't tell what it's actually doing at a glance without thinking through all the inheritance, generics, etc., and projects get so big that eventually you cannot understand parts of the codebase.

There is a term for this: "data-oriented design". Mike Acton (formerly at Insomniac) gave a talk about it at CppCon that goes over NOT writing code or thinking about problems the OOP way. As he says in the talk, these are not new ideas; they've been around for decades but got lost as OOP grew popular. The level of OOP philosophy that gets ingrained into programmers, usually starting with CS at university, has created a state of things where a lot of programmers don't know any other way of doing things.

The talk focuses on performance, i.e. how to take advantage of the CPU cache by packing data, and demonstrates how OOP causes cache misses (IIRC; it's been years and I've only seen the talk once, though I quickly skimmed it again just now). Even if you don't need that performance, the approach also improves code maintainability by getting you out of the depths of hell caused by the vast majority of OOP philosophy: think about the data, not the code as a platform. It doesn't matter if you have three methods instead of one generic one that can do a whole bunch of things; the three methods make the code clearer AND let you design them to pack data better and actually hit those cache lines.



99% of the time when I look at other people's code, the slowdowns inherent to what they write come from inefficient memory usage. There's a topic I made a long time ago about cache efficiency that might be worth looking at: https://www.resetera.com/threads/le...ding-how-slow-computer-processes-can-be.4927/

(note: someone said they could write an entire game in Lua; please look at the above)

Languages with garbage collection and hands-off memory management employ a number of schemes the programmer doesn't have to worry about, but there is no universal best memory management scheme. Your memory management should be built specifically for the job at hand; it should hand-fit the situation to get the most performance out of the system you are creating. In C, for example, I can explicitly declare the cache alignment for a structure, and even declare the alignment for a contiguous block of memory to populate with instances of that structure. I can manually pack my data by simply reordering the declaration of variables inside my structure. I can, with fine granularity (with respect to the operating system, of course), define exactly where my objects go in memory. All of this is supposed to be invisible in a managed language like Java, so while you can do some of it there, the language itself fights you along the way. C is made explicitly to expose this stuff to the programmer.
Yep, I need control of the memory. I can't depend on the GC to do it whenever it decides to, after things are out of scope or no longer referenced, and it has its own way of cleaning up that I can't control. A lot of projects that start to have problems then develop solutions to appease the GC, which to me is ridiculous: you are creating a solution to a problem you shouldn't be having. GC languages are fine, but they have a limit. If you need the ability to lay out memory as you see fit, then a GC language is out of the question, and the times you need control over memory are usually the times you need to push the hardware to its limit, whether that's for games, servers for heavy data processing, etc.
 
Last edited:

Moosichu

Member
Oct 25, 2017
898
The fact that 99% of programmers target only ARM and x86 CPUs, yet very few have ever opened a manual for either architecture, is not a good thing.
 

Akelisrain

Member
Oct 30, 2017
2,416
Bel Air MD
Hate to admit it, but I switched from Computer Science to Computer Information because I felt stupid while learning. I was close to graduation and felt I hadn't learned anything. I took classes in Java, C, C++, and Visual Basic.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
I am all for learning C and C++ to gain a better understanding of the workings of computer hardware, but it is exactly stuff like this and memory management (and pointers) that increase the complexity and make it difficult for people to learn algorithmic thinking and gain the abstraction skills needed to become good computer scientists. It is a sound strategy to first learn more convenient languages (not those eschewing types, though!) to learn to think like a computer scientist before focusing on more technical languages.

Not to be mean, but I don't understand how someone can think "algorithmically" while avoiding pointers. Pointers are a fundamental part of computer programming; they really aren't optional to learn.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
May I ask what you use instead of OOP and what kind of software do you write?

Just curious.

I wrote a free-list memory pool for an engine I made that is ECS-like, but not OOP. The memory pool is a contiguous block of allocated memory where each "subblock" is a fixed size. By fixing the size, I can use a union in C so that each subblock is either an actual instance of a system entity or a pointer to the next block of free memory. This lets me populate my memory pools in whatever order I wish to fill them (Z-ordered, linear, etc.).

The ECS-like feature is that each memory pool I create is for a single system, i.e. a memory pool of renderable instances, a memory pool of collidable instances, etc.

Still organized, but NOT OOP.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Not to be mean, but I don't understand how someone can think "algorithmically" while avoiding pointers. Pointers are a fundamental part of computer programming; they really aren't optional to learn.
Because algorithms don't have anything to do with pointers; pointers are about using the computer properly. Algorithms are often a theoretical approach, which is why they are usually written in pseudocode at first. You are talking about the actual implementation of an algorithm.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
Because algorithms don't have anything to do with pointers but with using a computer properly. Algorithms in general are a theoretical approach often and that's why they are usually written in pseudo code at first. You are talking about the actual implementation of an algorithm.

There are algorithms and patterns that most definitely rely on understanding what is going on in memory and knowing the difference between accessing an element by reference vs. by value. Try to explain the algorithm behind a free list without understanding pointers.
 

spam musubi

Member
Oct 25, 2017
9,381
There are algorithms and patterns that most definitely rely on understanding what is going on in memory and knowing the difference between accessing an element by reference vs. by value. Try to explain the algorithm behind a free list without understanding pointers.

Algorithms are theoretical constructs, pointers are technical domain knowledge. You can know about and talk about algorithms without involving pointers. If you want algorithms based on pointers, then yes, you need to know about pointers.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
Algorithms are theoretical constructs, pointers are technical domain knowledge. You can know about and talk about algorithms without involving pointers. If you want algorithms based on pointers, then yes, you need to know about pointers.

The quote in contention is "thinking algorithmically." A part of writing efficient and useful algorithms -- understanding the theoretical benefits and utility of your algorithm -- is understanding HOW your algorithm interacts with your memory. Otherwise, you are going to be writing some piss poor algorithms.

You just cannot convince me that one can think "algorithmically" without having any understanding of the difference between by value and by reference. These are huge parts of understanding the "theory" behind a useful algorithm.
 

spam musubi

Member
Oct 25, 2017
9,381
The quote in contention is "thinking algorithmically." A part of writing efficient and useful algorithms -- understanding the theoretical benefits and utility of your algorithm -- is understanding HOW your algorithm interacts with your memory. Otherwise, you are going to be writing some piss poor algorithms.

You just cannot convince me that one can think "algorithmically" without having any understanding of the difference between by value and by reference. These are huge parts of understanding the "theory" behind a useful algorithm.

Algorithmic theory generally refers to memory complexity, which is more concerned with usage of memory as a function of the size of input. Just like time complexity. Pointers are an implementation detail, and their usage varies from language to language. Algorithmic theory is generally unconcerned with low level details. You're confusing practical programming knowledge and efficiency with algorithmic theory. Yes, one cannot be a good programmer without being concerned with how memory is used, but one can be a good algorithmic thinker without it. Algorithms are generally concerned with more high level concepts. You can come up with a fantastically efficient algorithm that's actually terrible to use in practice, and vice versa.

It's not a matter of me convincing you, it's a matter of you using words that have clear cut definitions improperly. Which I can't help with.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
Algorithmic theory generally refers to memory complexity, which is more concerned with usage of memory as a function of the size of input. Just like time complexity. Pointers are an implementation detail, and their usage varies from language to language. Algorithmic theory is generally unconcerned with low level details. You're confusing practical programming knowledge and efficiency with algorithmic theory. Yes, one cannot be a good programmer without being concerned with how memory is used, but one can be a good algorithmic thinker without it. Algorithms are generally concerned with more high level concepts. You can come up with a fantastically efficient algorithm that's actually terrible to use in practice, and vice versa.

It's not a matter of me convincing you, it's a matter of you using words that have clear cut definitions improperly. Which I can't help with.

It's not just an implementation detail; pointer usage is an entirely different operation from accessing something by value. Being entirely different operations, with different actions available, pointer usage absolutely falls into the realm of "high level concept." You need to understand that the block of memory you are using is NOT a copy of another block of memory, but rather just a reference to it, in order to understand the underlying algorithm behind, say, recursing a linked list.

There are no shared high-level concepts between accessing something by reference and by value. In fact, not understanding that the two are entirely different actions is what leads to problems "thinking algorithmically" later in life.
 

spam musubi

Member
Oct 25, 2017
9,381
It's not just an implementation detail; pointer usage is an entirely different operation from accessing something by value. Being entirely different operations, with different actions available, pointer usage absolutely falls into the realm of "high level concept." You need to understand that the block of memory you are using is NOT a copy of another block of memory, but rather just a reference to it, in order to understand the underlying algorithm behind, say, recursing a linked list.

There are no shared high-level concepts between accessing something by reference and by value. In fact, not understanding that the two are entirely different actions is what leads to problems "thinking algorithmically" later in life.

Whether a linked list or LRU cache is implemented with pass by value or reference has no bearing on its algorithmic time complexity. It can possibly impact its space complexity, but when speaking about theory, it's generally trivial to exchange the two (after all, passing by reference is passing the value of the pointer, and the algorithmic complexity of getting the referenced value of a pointer is usually insignificant compared to the algorithmic complexity of the problem you're solving). You're thinking like a programmer and not a computer scientist/mathematician. The traveling salesman problem is NP-hard regardless of whether you use pointers or values. A vast majority of algorithmic literature is generally unconcerned with this. And the people who primarily care about these things don't work primarily with pointers, so it doesn't really matter. A good programmer needs to understand both, but algorithms are abstract.

I can't really reason with you because you're just conflating two separate things.
 

Threadkular

Member
Dec 29, 2017
2,421
I'm surprised Jonathan Blow doesn't work at a university or something. Does he still do all this off his profits from his games?
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
Whether a linked list or LRU cache is implemented with pass by value or reference has no bearing on its algorithmic time complexity. It can possibly impact its space complexity, but when speaking about theory, it's generally trivial to exchange the two (after all, passing by reference is passing the value of the pointer, and the algorithmic complexity of getting the referenced value of a pointer is usually insignificant compared to the algorithmic complexity of the problem you're solving). You're thinking like a programmer and not a computer scientist/mathematician. The traveling salesman problem is NP-hard regardless of whether you use pointers or values. A vast majority of algorithmic literature is generally unconcerned with this. And the people who primarily care about these things don't work primarily with pointers, so it doesn't really matter. A good programmer needs to understand both, but algorithms are abstract.

I can't really reason with you because you're just conflating two separate things.

The underlying order and memory usage absolutely affect the algorithmic time complexity. Going back to my example of the free list implemented in my engine, the union in memory is precisely what keeps the time complexity down: it turns what would be a linear search for a free slot into a constant-time operation, from worst case O(n) to worst case O(1).
 

spam musubi

Member
Oct 25, 2017
9,381
The underlying order and memory usage absolutely affect the algorithmic time complexity. Going back to my example of the free list implemented in my engine, the union in memory is precisely what keeps the time complexity down: it turns what would be a linear search for a free slot into a constant-time operation, from worst case O(n) to worst case O(1).

Algorithms that trade space management during allocation instead of searching/access aren't uncommon. See LRU cache.

If you're concerned about the O(k) where k is the size of the object of memory and not the number of objects (which is normally what people use N for) then yes, that should be a part of your algorithm. But in my experience most algorithms just pass by reference and assume an O(kN) array exists of the data to begin with, and assume N >> k so O(kN) ~ O(N). But this is getting pretty specific, and can be talked about as domain knowledge for the particular problem and not general algorithmic knowledge. I doubt you'd find someone who is this versed in algorithms who doesn't have a cursory understanding of pointers anyway.
 

Heckler456

Banned
Oct 25, 2017
5,256
Belgium
Hate to admit, but I switched from Computer Science to Computer Information because i felt stupid when learning. I was close to graduation and felt I didn't learn anything. I took classes in Java, C, C++, Visusl Basic.
Could you go into some more depth on that? I'm picking up CS this fall (at 28), and reading stuff like this kinda worries me.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
Algorithms that trade space management during allocation instead of searching/access aren't uncommon. See LRU cache.

If you're concerned about the O(k) where k is the size of the object of memory and not the number of objects (which is normally what people use N for) then yes, that should be a part of your algorithm. But in my experience most algorithms just pass by reference and assume an O(kN) array exists of the data to begin with, and assume N >> k so O(kN) ~ O(N). But this is getting pretty specific, and can be talked about as domain knowledge for the particular problem and not general algorithmic knowledge. I doubt you'd find someone who is this versed in algorithms who doesn't have a cursory understanding of pointers anyway.

The entire point of the conversation was someone talking about "learning" algorithms without understanding pointers. That's what began this whole discussion: you're going to need to understand pointers if you want to really, truly think algorithmically. It's not optional. Pointers are a fundamental part of computer programming. Without learning pointers, you will limit what you can do with algorithms.
 

JeTmAn

Banned
Oct 25, 2017
3,825
the entire point of the conversation was about someone talking about "learning" algorithms without understanding pointers. Which was what began the entire conversation -- you're going to need to understand pointers if you want to really, truly think algorithmically. It's not something really optional. Pointers are a fundamental part of computer programming. Without learning pointers, you will limit what you can do with algorithms.

You've been listing edge cases where pointer knowledge is essential, but that doesn't mean it's essential to "thinking algorithmically". How many core CS algorithms can you think of which necessitate the use of pointers rather than some abstraction for value stores?

This isn't to say that pointers aren't essential programming knowledge, but they are an implementation detail and algorithms shouldn't concern themselves with that whenever possible.
 

spam musubi

Member
Oct 25, 2017
9,381
the entire point of the conversation was about someone talking about "learning" algorithms without understanding pointers. Which was what began the entire conversation -- you're going to need to understand pointers if you want to really, truly think algorithmically. It's not something really optional. Pointers are a fundamental part of computer programming. Without learning pointers, you will limit what you can do with algorithms.

Pointers are a fundamental concept of programming for sure, but not necessarily a fundamental concept of algorithms. Either way, I don't think this is necessarily limiting if you want to be a programmer, considering the top languages used in the industry are, in order, Java, Python and JS, which don't really care to make that distinction: https://www.codingdojo.com/blog/7-most-in-demand-programming-languages-of-2018/

Depends on the job. As a person with a PhD in CS who has taught theory of computation and holds an SV job, I think we'd all be better off knowing about pointers, but the reality is that it's absolutely not a requirement to get a job, even a top one.
 

Deleted member 12790

User requested account closure
Banned
Oct 27, 2017
24,537
You've been listing edge cases where pointer knowledge is essential, but that doesn't mean it's essential to "thinking algorithmically". How many core CS algorithms can you think of which necessitate the use of pointers rather than some abstraction for value stores?

I can think of, and have implemented, lots within the domain of game development. There are numerous design patterns that rely on pointers for their underlying logic.
 

inpHilltr8r

Member
Oct 27, 2017
3,256
theory vs practice

research vs development

academia vs actually making things

you're doing it wrong vs at least i'm doing something
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
The quote in contention is "thinking algorithmically." A part of writing efficient and useful algorithms -- understanding the theoretical benefits and utility of your algorithm -- is understanding HOW your algorithm interacts with your memory. Otherwise, you are going to be writing some piss poor algorithms.

You just cannot convince me that one can think "algorithmically" without having any understanding of the difference between by value and by reference. These are huge parts of understanding the "theory" behind a useful algorithm.
If there is no way of convincing you, why even discuss? But in general we have different levels of complexity for algorithms, completely independent of how a computer actually works; only a Turing machine is needed for that.
If you have two algorithms solving the same problem, and one has a complexity of O(n^2) while the other is in O(n), the second one will always be faster for sufficiently large inputs, no matter how badly you deal with your memory.
 
Oct 26, 2017
20,440
OK so I'm not a programmer but

My understanding of programming is that a lot of it is just Googling online when you run into an error? And with mainstream programming languages this is very easy to do.

... Not sure how much online information Blow's language will have.
 

Dinjoralo

Member
Oct 25, 2017
9,167
There really isn't anything used instead (it's literally just not using OOP), and I write a lot of performance critical tooling [...]

Yep, I need control of the memory, I can't depend on the GC to do it [...]

This post validates my feelings about OOP when I had to learn it in college.
 

spam musubi

Member
Oct 25, 2017
9,381
OK so I'm not a programmer but

My understanding of programming is that a lot of it is just Googling online when you run into an error? And with mainstream programming languages this is very easy to do.

... Not sure how much online information Blow's language will have.

Yeah, lack of easily available support is the killer of any new language/tool. His best bet would be to open source it and let the community iterate on it.
 

SapientWolf

Member
Nov 6, 2017
6,565
I think C/C++'s pointer syntax makes the languages more difficult for new programmers to learn than they should be. The underlying concept isn't as byzantine as the code can be.
 

JeTmAn

Banned
Oct 25, 2017
3,825
I think C/C++'s pointer syntax makes them more difficult for new programmers to learn than it should be. The underlying concept isn't as Byzantine as the code can be.

I don't think it's explained well. Just draw a bunch of little houses with numbers inside for values and numbers outside for addresses. Then a piece of mail with an address on it.
 

Trumpets

Banned
Jan 7, 2018
46
C# is already excellent for games, but the more the merrier I guess.

Just so long as he doesn't try and add a pretentious story to it.
 

Akelisrain

Member
Oct 30, 2017
2,416
Bel Air MD
Could you go into some more depth on that? I'm picking up CS this fall (at 28), and reading stuff like this kinda worries me.
It was most likely just my experience. Even though I was completing projects, I never felt that I grasped the work I was doing. I struggled to put my thoughts into the code, and felt that I wasn't mentally strong enough to grasp abstract concepts. Eventually I quit and decided to change majors. However, I still find myself reading books on coding. Maybe one day I will start working on my Double Dragon 2 spiritual successor.
 

Aether

Member
Jan 6, 2018
4,421
So someone who creates an algorithm for a problem, implements it in Java, and leaves it at that never learned to think algorithmically?
Sure, pointers are not a complex subject to learn, and every good school/college/university should at least mention them and how they work,
but you are assuming from your specific use case that it should be that way for everyone.

I'm quite confident when I say that most engineers don't develop that close to the metal. Java is not without reason the default language that most people learn. Often the time and money spent developing that efficiently is not representative of the gain in performance, especially if it's stuff that's not used constantly.
Java/web/C#/etc. developers don't need pointers, and many will never use them again outside of school, because they only develop for frameworks, existing products, web applications, and so on.

@Topic:
Let him do what he wants. I don't believe it will have a big effect, but it doesn't trouble anybody. To me it seems he more or less wants a wrapper that essentially hides all the ugly stuff of C++ that he doesn't need for game development, while adding stuff he personally needs and thereby aiding his workflow. If his development preferences align with those of a lot of game developers, this could work well; if not, no one will adopt it and it will be pointless. (Nobody likes code that no one outside the business can understand.)

(Personally I'm not the biggest fan of the idea, since I developed for 5 years with a custom language, and while it made a lot of stuff easier (developing for our product), it is useless outside the specific context of that business.)


Regarding OOP: it has its flaws, sure. Big ones, actually. But in many big projects, especially ones that are just handed off to the next programmer, it usually is easier to understand (as long as it's not obsessive, to-the-textbook OOP, since sometimes a few lines can spare you a few new classes).
 

Yoshi

Banned
Oct 27, 2017
2,055
Germany
Not to be mean, but I don't understand how someone can think "algorithmically" but avoid learning pointers. Pointers are a fundamental part of computer programming; they really aren't something optional to learn.
Algorithmic thinking is not dependent on computer programming, and the specifics of pointers are (in most cases) not integral to it. You cannot program well without thinking algorithmically, but you can very well be competent in algorithmic thinking while not being well-versed in programming. Thinking up abstract solutions to problem classes more often than not requires that you abstract away from the possible implementation. It is not coincidental that algorithms are usually stated in languages that offer primitives going way beyond any programming language. This kind of problem-oriented, deliberate thinking, if you start training it, is in many cases hampered if you require it to be expressed in a very technical programming language.

Of course, more comfortable languages are not exactly the ideal language to express algorithms either, but the extrinsic load is certainly lower, offering more room for growth.

I did not mean that you should avoid pointers altogether; rather, the abstraction offered by Java or C# is a good compromise between technical detail and mathematics. From a programmer I'd trust, I'd expect a working knowledge of the more intricate details, including pointers, because they are of course important for careful program design and debugging even in more modern languages. However, avoiding this additional layer of stuff that demands your attention when developing for a modern system (and on a budget; of course this is different when you are working on an AAA game for Ubisoft, where every bit of juice must be pressed out of the console) is a good choice from my perspective.
 

Deleted member 41271

User requested account closure
Banned
Mar 21, 2018
2,258
Blow made great games. Some of his comments on puzzle games shaped my own way of *thinking* about puzzle game design. The talks with the Miegakure dev are really great, and thanks to his work I really see similar things in puzzle game design as he does.
I even loved The Witness *a lot*.


At the same time, saying that he is a misogynist isn't an accusation, it's just a fact, something that became known very recently in no uncertain terms. He blames the supposed "lack" of interest women have in programming on biology, despite people carefully explaining to him that women in tech tend to face harassment literally from the moment they enter the field. It's one of the most toxic work and education environments for women in the West *in general*.

Some people are very convinced this is absolutely not a factor, and those people tend to be almost always guys. The arguments for that are, oddly, very similar to gamergaters waxing on and on about how women don't get games; it's very weird to see people suddenly switch gears and fall right into their camp when it comes to the claim that women just aren't into tech, and harassment TOTALLY is no issue at all nope no no nope nope nope!

Quite simply, programming had many women do the groundwork. Then men pushed women out. This isn't even debatable, it's simply history. And *that* is why we are where we are. Women are harassed by peers, professors and teachers mock women, women are told they're too stupid to make it anyway, and office cultures in the field have shown in recent years how often men make it absolutely horrible for women to get anything done in tech. Yeah, sure, when women get laughed at by a teacher for even joining an Informatik (programming at school, for Germans) class, it's definitely feeble lady brains having troubles. *nod*


Look, sorry, I originally wanted to write a post on Blow's actual language, but there's way too much apologia for his point (and the usual "women totally couldn't be into it, we'll never know! How would anyone know? Testosterone!" stuff that goes along with it). Makes one cranky.
Me, I'm using C#, hate object-oriented programming with a passion, and definitely think a lot of things could be easier, but I don't quite think Blow's the savior here. A lot of the issues go deeper; the xkcd one on standards really seems to hit the mark there.

What we really need is not more languages, but less, with good frameworks to get *away* from the languages, especially for games, to "plug in" artists and designers better than we do *without* them having to learn every dang implementation because some genius had his custom idea that's completely different from what the next guy in the next company is running.

Unity, I feel, is a much more useable step. Not the best, certainly (lol garbage collector), but a step in the right direction for sure.
 

Aether

Member
Jan 6, 2018
4,421
...
At the same time, saying that he is a misogynist isn't an accusation, it's just a fact, something that became known very recently in no uncertain terms. He blames the supposed "lack" of interest women have in programming on biology, despite people carefully explaining to him that women in tech tend to face harassment literally from the moment they enter the field. It's one of the most toxic work and education environments for women in the West *in general*.

Some people are very convinced this is absolutely not a factor, and those people tend to be almost always guys. The arguments for that are, oddly, very similar to gamergaters waxing on and on about how women don't get games; it's very weird to see people suddenly switch gears and fall right into their camp when it comes to the claim that women just aren't into tech, and harassment TOTALLY is no issue at all nope no no nope nope nope!

Quite simply, programming had many women do the groundwork. Then men pushed women out. This isn't even debatable, it's simply history. And *that* is why we are where we are. Women are harassed by peers, professors and teachers mock women, women are told they're too stupid to make it anyway, and office cultures in the field have shown in recent years how often men make it absolutely horrible for women to get anything done in tech. Yeah, sure, when women get laughed at by a teacher for even joining an Informatik (programming at school, for Germans) class, it's definitely feeble lady brains having troubles. *nod*

Look, sorry, I originally wanted to write a post on Blow's actual language, but there's way too much apologia for his point (and the usual "women totally couldn't be into it, we'll never know! How would anyone know? Testosterone!" stuff that goes along with it). Makes one cranky.
Me, I'm using C#, hate object-oriented programming with a passion, and definitely think a lot of things could be easier, but I don't quite think Blow's the savior here. A lot of the issues go deeper; the xkcd one on standards really seems to hit the mark there.

What we really need is not more languages, but less, with good frameworks to get *away* from the languages, especially for games, to "plug in" artists and designers better than we do *without* them having to learn every dang implementation because some genius had his custom idea that's completely different from what the next guy in the next company is running.

Unity, I feel, is a much more useable step. Not the best, certainly (lol garbage collector), but a step in the right direction for sure.

I think you're mostly right. Maybe I've been lucky, but at my former workplace, and now at the university, there is almost no discouraging of women, which is a great thing. I also have enough female friends who are programmers and feel valued and treated equally. I think it is getting way better, and is already way better than in other fields from what I've heard (e.g. mechatronics).
All that said, I've also heard bad stories from good friends; some of the old professors at school weren't there yet, and some of the students starting in computer science are a disaster in this regard. And I know that my area is not representative of other places.
So disregarding the criticism and just brushing the problems off as if they don't exist is arrogant and, like you say, almost always a thing that men do. Jonathan Blow is the best example.

All that said, in this thread I'm still more interested in opinions on the programming situation, and whether someone has better arguments for his approach.
As it stands, to me it looks like "I want to work how I work, and don't want to be bothered by stuff that others need". Good for him, but not so much for the industry, since not many projects are handled by a small team of programmers.
 

Myself

Member
Nov 4, 2017
1,282
I can agree that the language, if you try to learn it from its C roots upward to current C++17 and 20, is a complicated and convoluted mess that will have you wanting to scream (and I have 20+ years of commercial experience with it). But C++ is moving away from that as fast as it can. Bjarne Stroustrup (its inventor) talks about this ALL the time, and about how C++ is too crazy for beginners. He also talks about how there are just too few libraries, how the packaging system sucks, etc. One thing I think C++ needs is big backers to fund the experts full time so we can have decent packaging, deployment, and libraries for domains such as games. No one really wants to write Yet Another Work-Stealing Job Queue; the language needs that shit as standard, or at least a way to get expert-level packages and integrate with them easily. vcpkg is quite good in this regard, but it's low level, i.e. it gives you the lib and source/headers with no real integration.

Even with C++'s failings, and the fact that I'd like a better high-performance, low-pain language, I'm not sure a new language by some dude that gives you 15% more productivity is going to help.
 

1-D_FE

Member
Oct 27, 2017
8,275
I never followed Unity's own tutorials, but this is fucking shitty (and it's entirely unsurprising that this, coupled with its pretty terrible GC, results in Unity's reputation as unoptimized).

Actually, going deeper, I see zero reason Unity doesn't come with an integrated pooling solution by default. The GameObject class should come with pooling methods like preloading X instances of a prefab to the pool on scene load (equally important unless you want a hiccup when you suddenly need 100 bullets), and a dispose / remove method that simply disables the GameObject and repools it. Admittedly, I haven't given this more thought than the time it took to write this post, so there might be obvious reasons I'm not seeing why Unity shouldn't do this.
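The pooling idea being described can be sketched engine-agnostically in a few lines; this is a hypothetical illustration, not Unity API (Bullet, BulletPool, spawn, and despawn are made-up names):

```cpp
#include <cstddef>
#include <vector>

// Preload N objects up front, hand out inactive slots on demand, and
// "destroy" by deactivating and returning to the pool -- so spawning
// never allocates mid-game.
struct Bullet {
    float x = 0, y = 0;
    bool active = false;
};

class BulletPool {
    std::vector<Bullet> pool;  // contiguous, preloaded storage
public:
    explicit BulletPool(std::size_t n) : pool(n) {}

    Bullet* spawn() {                     // reuse the first inactive slot
        for (auto& b : pool)
            if (!b.active) { b.active = true; return &b; }
        return nullptr;                   // pool exhausted: no hidden allocation
    }
    void despawn(Bullet* b) { b->active = false; }  // "disable and repool"
};
```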

Just a ton of shitty Unity tutorials out there (even beyond Unity's official ones). I remember when I first started playing around with Unity, I heard good things about this book:

https://www.amazon.com/gp/product/B00LIYS9F0/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

One of the first projects in that book, he's creating a game where the score and high score are updated every frame in Update, and he was concatenating strings for both. So if the game ran at 60fps, he was concatenating 120 strings per second, whether the score had changed or not. The book sold him as a professor at a leading school. No mention at all in the book of GC either.

Obviously Unity creates garbage with string concatenations too. So yeah, there's some hilariously bad stuff out there that contributes to the GC issue with Unity.
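The usual fix for the per-frame concatenation described above is to cache the formatted string and rebuild it only when the value actually changes; a minimal sketch (ScoreHud is a made-up name, shown in C++ where the cost is an allocation rather than GC pressure):

```cpp
#include <string>

// Cache the HUD text; rebuild only on a score change instead of
// concatenating a fresh string every frame.
struct ScoreHud {
    int lastScore = -1;
    std::string text;

    // Call once per frame; allocates only when the score changes.
    const std::string& update(int score) {
        if (score != lastScore) {
            text = "Score: " + std::to_string(score);
            lastScore = score;
        }
        return text;
    }
};
```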
 

Weltall Zero

Game Developer
Banned
Oct 26, 2017
19,343
Madrid
Just a ton of shitty Unity tutorials out there (even beyond Unity's official ones). I remember when I first started playing around with Unity, I heard good things about this book:

https://www.amazon.com/gp/product/B00LIYS9F0/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

One of the first projects in that book, he's creating a game where the score and high score are updated every frame in Update, and he was concatenating strings for both. So if the game ran at 60fps, he was concatenating 120 strings per second, whether the score had changed or not.

Holy shit. Even if he's clueless about string operations, how did he not see it in the profiler?
...
I'm deluding myself if I think this dude knew what the profiler is, aren't I? :/
 

Bjones

Member
Oct 30, 2017
5,622
Godot probably has one of the best custom scripting languages I've seen. It's not so much the coding itself, but the way Godot is built, it's not nearly as limiting as others I've seen.
 

PlatStrat

Member
Oct 27, 2017
565
I'm going to throw my two cents in and say that while C# may not be the best language for game development, I will take it over C++ for pretty much anything else. I can't imagine trying to write a website using C++. Especially in today's businesses where everything is moving client side.
 

filkry

Member
Oct 25, 2017
1,893
If there is no way of convincing you, why even discuss? But in general we have different levels of complexity for algorithms, completely independent of how a computer actually works; only a Turing machine is needed for that.
If you have two algorithms solving the same problem, and one has a complexity of O(n^2) and the other is in O(n), the second one will always be faster, no matter how badly you deal with your memory.

Not sure if I'm exactly replying to your point, but this isn't true. For "small" datasets (and many common problems in game development are small), "slow" linear algorithms through nice, in-order memory are faster than, say, log n algorithms that require jumping around unpredictably.

I bet I could also make an n^2 program on a data set of, say, 100 that is faster than an O(n) program that involves non-contiguous pointer hopping.

For many practical cases memory organization beats algorithmic complexity for performance.

That said, for an operation that we do thousands of times a frame, I did recently get a speed improvement by replacing an n^2 operation with an O(n) pointer hop.
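The contiguous-versus-pointer-hopping distinction in this post can be illustrated with two traversals that are both "O(n)"; the only difference is memory layout (a sketch of the idea, not a benchmark):

```cpp
#include <list>
#include <numeric>
#include <vector>

// Same linear sum, two layouts: the vector walks cache lines in order,
// while the list hops to wherever each node happened to be allocated.
// Identical complexity, very different behavior on real hardware.
long sum_contiguous(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0L);
}
long sum_chased(const std::list<int>& l) {
    return std::accumulate(l.begin(), l.end(), 0L);
}
```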
 

Arebours

Member
Oct 27, 2017
2,656
So someone who creates an algorithm for a problem, implements it in Java, and leaves it at that never learned to think algorithmically?
Sure, pointers are not a complex subject to learn, and every good school/college/university should at least mention them and how they work,
but you are assuming from your specific use case that it should be that way for everyone.
Sure they do, but algorithmic complexity has limited relevance to code that runs on actual hardware, which afaik all code does. It's great if you are into researching theory, but it happens all the time that the "best" algorithms are far from the fastest when it's time for actual implementation. If you think software is a real platform, then you never learn that lesson and will forever be a bad programmer. To write fast code you need to understand the machine; there is no way around that.

Software is getting slower more quickly than computers are getting faster today. This is important.
 

Qwark

Member
Oct 27, 2017
8,038
Could you go into some more depth on that? I'm picking up CS this fall (at 28), and reading stuff like this kinda worries me.
Not him, but I had a similar experience. I graduated in computer science, but I feel like I didn't retain the things that would be most helpful for me now. I learned a lot of theoretical junk (that I still don't totally understand), but honestly I never use it in my day-to-day job. The practical courses were by far the most valuable for me. One thing I will say is: pay attention in your database courses. That stuff comes up so frequently, and I really wish I had paid more attention in those courses.
 

Qassim

Member
Oct 25, 2017
1,532
United Kingdom
OK so I'm not a programmer but

My understanding of programming is that a lot of it is just Googling online when you run into an error? And with mainstream programming languages this is very easy to do.

... Not sure how much online information Blow's language will have.

Not really, no. Unless it's unexpected behaviour, that's usually done by people who don't know how to use a debugger or read API documentation. Understanding languages and APIs is only part of programming; those are just the tools you use to solve your problem. It's like having a complicated bit of machinery with thousands of different functions: you don't necessarily know how they all work, and memorising it all isn't particularly useful, so you might google an example of how someone else used a particular function of the machine to help you understand it.

The hard part of programming isn't the languages, frameworks, or APIs so much as the actual application of those things: solving problems, designing systems, and so on, with all the different variables at play.

Regardless, as has been said, this language seems to be primarily designed for Jonathan Blow himself; since he designed it, he won't need to google how to use it :p
 
Oct 27, 2017
1,393
Regarding OOP: it has its flaws, sure. Big ones, actually. But in many big projects, especially ones that are just handed off to the next programmer, it usually is easier to understand (as long as it's not obsessive, to-the-textbook OOP, since sometimes a few lines can spare you a few new classes).
This. As a programmer at a company where devs get shifted between large projects a lot, we focus more on easy-to-understand code than on the best possible performance. It also helps that our projects aren't as performance-dependent as something like video games. Obviously OOP can be taken too far (and some people do take it too far), but it's a tool to be used based on the situation.
 

Bryo4321

Member
Nov 20, 2017
1,518
Could you go into some more depth on that? I'm picking up CS this fall (at 28), and reading stuff like this kinda worries me.
You will have to sit down and learn it. If you put in the time, you will be fine. Don't let it scare you. CS has a lot of silly technical terms that aren't as complicated as they sound.