I am a strong proponent of not using anything that forces you to use OOP as a "baby's first language". Then again, I see OOP as a heresy in general. To me programming is about algorithms, data structures, and data flow, not some fuzzy concept of OOP that doesn't even have an exact definition; you can even find extremists claiming that literally everything is OOP.
My first true language was C, learned in college, and I'd definitely do that again. When I switched to CS everything was Scheme (Lisp) based, which I actually preferred to Python, and later on it was mostly Java based. I still hate Java, but it's gotten me through interviews and interview screens, so I'm okay with it now.
For me, a first language should be strict and not too loose with the rules (so not Python). Once you've got the rules down and understand why they are there and why certain languages use those constructs, build from there.
But take all of this with a grain of salt, because I was a very distracted student, more interested in working for startups during college than in graduating with a nice GPA ;)
I don't know what "heresy in general" means regarding a programming language feature, but I dislike OOP in Java and C++ code where I have encountered it.
The largest code base I love and feel productive in is the Linux kernel, which is mostly C and assembly: a medium-sized project of 30 million lines, with probably fewer than ten thousand active developers (although that's difficult to judge, given the way development of certain features can go on for years in private repositories before being merged), and I would say a relatively high rate of change for a project of that size.
It frequently has structures containing objects of different types that have to identify themselves so they can be processed in different ways. This general pattern, and function pointers in C, predate formal "OOP" of course, so I don't call it manually written vtables. But yes, it has this, and it is a good technique.
OOP languages which do this automatically for you are, as far as I can see, just a thin and often clunky layer of syntactic sugar on top of it.
> just a thin and often clunky layer of syntactic sugar on top of it.
That sugar does help.
C lets you add another function pointer to that structure, but it doesn't force you to set that pointer. The C++ compiler forces programmers to implement all interface methods: forget to override a method, and the code which instantiates the class won't compile, complaining about not being able to instantiate an abstract class.
Code navigation is another thing. In Visual Studio, while the cursor is over an abstract method, the F12 key looks up all implementations of the abstract class and populates the “find symbol results” panel with a clickable list of the implementations of the method. With the Visual Assist addon installed, the Alt+G key does the same, only it presents the results in a popup menu instead of a separate panel.
Both things are borderline useless for small projects, but IMO they help a lot for medium to large ones, especially developed by multiple people.
I've never found it very compelling. As I said, there are also downsides: an inflexible object layout that is implementation specific (so it can be difficult to manipulate from low-level assembly).
> C lets you add another function pointer to that structure, but it doesn't force you to set that pointer. The C++ compiler forces programmers to implement all interface methods: forget to override a method, and the code which instantiates the class won't compile, complaining about not being able to instantiate an abstract class.
Never found that particularly helpful if the code is structured well. You'd either allow NULL implementations as the default, or return a "not implemented" error at least until all subsystems are converted; or the initialization functions that all object allocations should call (because the code is well written) can verify that all required fields are set. If you want to be even cleverer, you can probably do static initialization checks, at least where your fields are constant, and have those compile down to nothing, just checked at compile time.
> Code navigation is another thing. In Visual Studio, while the cursor is over an abstract method, the F12 key looks up all implementations of the abstract class and populates the “find symbol results” panel with a clickable list of the implementations of the method. With the Visual Assist addon installed, the Alt+G key does the same, only it presents the results in a popup menu instead of a separate panel.
I can see how that might help a little, although surely with some minimal scripting a symbol browsing tool should be able to be taught about similar patterns, like "find all functions that are assigned to this particular member of a structure of function pointers". Although I don't use IDEs or any symbol-tagging tools, usually just grep, so maybe I'm a luddite.
With a nice code base that follows reasonable conventions and naming, it's pretty easy to find things. For example, if you have a structure-of-function-pointers style of thing, you can find all definitions of "struct address_space_operations"; or if a function pointer member is called page_mkwrite, you can search for *_page_mkwrite and get ext4_page_mkwrite, xfs_page_mkwrite, btrfs_page_mkwrite, etc. (the naming is not always strictly enforced in Linux, but at least if you search for \.page_mkwrite you can usually easily spot non-conforming names).
> Both things are borderline useless for small projects, but IMO they help a lot for medium to large ones, especially developed by multiple people.
I don't see that it helps a lot, and even as syntactic sugar I don't see it being a big advancement in the scheme of things.
> Never found that particularly helpful if the code is structured well
IMO, all else being equal, errors detectable at compile-time should be detected at compile-time, for the following reasons. (1) No matter the circumstances, like time pressure, compiler errors are impossible to ignore, because they fail the build. (2) Compiler errors are easy to detect automatically, e.g. by the automatic build systems many projects have; even when a project has automatically run unit tests, people still need to write those tests first, i.e. the process is not completely automatic. (3) Runtime checks have runtime costs. Even if the checks are in static initializers or similar, that's still a runtime cost at startup and/or first use. Compile-time checks are free at runtime, which makes the code faster.
> although surely with some minimal scripting a symbol browsing tool should be able to be taught about similar patterns
Technically, yes. Practically, two things. If you're part of a team of developers, in particular a remote team where people use their own computers and have varying levels of programming experience, such scripting tools are hard to implement. And the time spent making and supporting these tools is time not spent improving the product being built.
> IMO, all else being equal, errors detectable at compile-time should be detected at compile-time.
I agree, and you can do that with C. See compiletime_assert in the Linux kernel. An ops table definition or registration macro can do compile time checking that such fields are populated.
> Technically, yes. Practically, two things. If you're part of a team of developers, in particular a remote team where people use their own computers and have varying levels of programming experience, such scripting tools are hard to implement. And the time spent making and supporting these tools is time not spent improving the product being built.
In all teams I've been part of including the loosely coupled and highly distributed open source side of kernel development, tools are widely shared and developed together. People don't just get left out in the wilderness. The bigger the project the more true this is.
I get that standard IDE niceness exists for some OOP language features, so it might be some small practical advantage, it just isn't an inherent advantage of the OOP paradigm. We don't need to use C++ to get nice rich and context aware editing search/display.
> An ops table definition or registration macro can do compile time checking
If programmers write that check, yes. The same applies to a unit test. The C++ compiler will verify that thing automatically, no extra work required, neither initially nor over the lifetime of the project.
> tools are widely shared and developed together
I prefer when a freshly cloned repository builds without any extra tools on a freshly installed computer, with either F7 in the correct version of Visual Studio, or something like cmake ../ && make in Linux shell, after installing the required dependencies from the official package repository of that Linux. In my experience, custom tools are often a pain in the long run.
> it just isn't an inherent advantage of the OOP paradigm
The paradigm is orthogonal to programming languages. I think OOP is merely a high-level design pattern where objects keep their private state accessible/modifiable only by calling methods of those objects.
In this sense, the source code linked above in this thread is 100% OOP, despite being plain C. Just like the majority of Linux APIs which operate on opaque handles: open/close/read/write for files, snd_pcm_* functions for ALSA, all the IOCTL requests for V4L2, the various handles for Vulkan. All these APIs implement an abstraction over external devices with very complicated internal mutable state; OOP is pretty much the only way to go for these things.
> If programmers write that check, yes. The same applies to a unit test. The C++ compiler will verify that thing automatically, no extra work required, neither initially nor over the lifetime of the project.
Yes, programmers have to write these checks; that's what a well-structured and maintained codebase is about. They have to write many checks no matter what the language, because missing initializers are one tiny little aspect of the things you might want to check for.
> I prefer when a freshly cloned repository builds without any extra tools on a freshly installed computer, with either F7 in the correct version of Visual Studio, or something like cmake ../ && make in Linux shell, after installing the required dependencies from the official package repository of that Linux. In my experience, custom tools are often a pain in the long run.
I didn't suggest otherwise, I was talking about editing and searching scripts and commands.
> The paradigm is orthogonal to programming languages.
Right, C can do it. As I said in the beginning, I don't like the OOP features of C++ because they're clunky, inflexible, and of hardly any benefit in terms of easier syntax.
> I think OOP is merely a high-level design pattern where objects keeping their private state only accessible/modifiable by calling methods of these objects.
No, "OOP" is definitely considered to be about language features too.
File descriptors are fundamentally polymorphic, they can be files, pipes, sockets, signalfd, eventfd and the list is ever growing, everything is a file.
To that end, I’ve left out a lot of the advanced features (so far), such as:
Templates
Type inference
Operator overloading
Overloading in general!
__traits
Classes/Interfaces/OOP
Pointers/references (mostly)
Memory management in general (thank you GC!)
I have no teaching experience to back it up, but I've always felt this approach of intentionally leaving out very nifty, useful, and sometimes important bits of knowledge is bad. For sure, in the scope of one class there's not enough time to thoroughly teach everything and test on it, and tradeoffs have to be made, but I'd still rather instructors explicitly mention such things as briefly as they can get away with (but three times), just with the expectation that not everyone is going to really even 'get' them and it won't be on any tests. People very much go with what they know (you'll find bubble sort in production software, alas), and if you don't even give them a sign saying "here's something you might want to explore on your own or come back to if you run into it again later", most won't set foot beyond their present understanding.

The worst offender in this sort of omission, I think, is the excellent SICP book. It's not the book's fault (it's not even really trying to "teach Lisp"), but people nevertheless use the book, learn a bit of Scheme as a side effect, and mistake that (because no one told them otherwise) for knowing Lisp. But they don't. To know Lisp one needs to learn Common Lisp with its macros, its OOP, its condition system, its types (which SBCL can check at compile time and use for more optimized assembly), its packages, its approach to interactive development... It's the same issue as high school C++ courses (I assume some of them are still around!) teaching it as C with classes, where students never hear about (let alone touch), say, templates, let alone any of the modern C++0x-and-on features.
As someone who had a Clojure phase about 9 years ago: the actual problem with modern Lisps is that they don't really offer a big set of unique good features. The basic functional map-filter-reduce "meat grinder" is everywhere (and has been almost everywhere for a long time; Java not having lambdas for ages is what clouded everyone's vision into thinking that's not the case). The other "Lisp feature" is macro metaprogramming based on homoiconicity (Code Is Data™), and we have it in some non-Lisps too now (hello Elixir), and… is it really the best way to do metaprogramming? Ehh. Many compiled and typed languages actually do similar-ish things: using syn in Rust proc macros, or haskell-src-exts in Template Haskell, you can work with code "as data". And that isn't as nice to work with as D's `static foreach/if` plus reflection/introspection facilities like allMembers/getMember.
Clojure is a sort of baby’s first lisp. That’s not a bad thing by the way. Its relative simplicity and restrictiveness are usually beneficial for corporate environments and I like working in it.
That said, if you want to talk about lisp you should really talk about Common Lisp. And there are a lot of solid criticisms of Common Lisp, but not enough features isn’t one I’ve heard before. The LOOP facility alone has an absurd number of features, to say nothing of the Meta Object Protocol.
It’s fair to say that a proficient Lisper can write performant code in any set of paradigms you like using a modern Common Lisp implementation. That comes with some considerable trade-offs though.
If people feel regularly compelled to redo the syntax, there's something wrong with it. "Many" means balkanization, which is another problem. (D has some carefully designed points which discourage balkanization.)
It goes back to too many parentheses. So I'm only half joking about it.
Well, "is" is quite the load-bearing word in those claims. I think it is kinda true in the sense that lots of things can be represented as OOP. A great example is how Scala represents ML-style ADTs as sealed class hierarchies.
If you are asking what level of abstraction is used by Forth, then the answer is: 2nd. It directly exposes the stack, memory, and machine code, and provides a thin wrapper on top of that. Developers can then build higher levels of abstraction, of course. It's possible to do object-oriented programming even in machine code, because that's exactly what a compiler produces.
The trickier question is: what level is Lisp? Is it assembler for a Lisp machine (because of opcodes like car, cdr, etc.)? Or, maybe, is it a 4GL because of its advanced meta-programming possibilities?
That does not describe all Forths. A Forth running on a Forth CPU would certainly be 2nd level, but depending on how the Forth is implemented and designed, it could easily be seen as 3rd level. Arguably, Forth is (or can be) more abstract than C and almost quasi-FP.
It's possible to implement high-level abstraction in machine code, or compile a high-level language into low level machine code, however it's not possible to hide low-level, machine dependent details in a low level language. It's not possible to hide machine opcodes in machine code, or CPU registers in assembler, or stack in Forth, so a developer must learn these things and deal with them, which makes the development process slower.
So it's expected to see an order-of-magnitude improvement in development speed, on average, when jumping from machine code to asm, or from asm to a 3rd-level language. Forth doesn't improve development speed by an order of magnitude compared to asm, because of the steep learning curve and low-level stack management. I have tried to learn Forth multiple times but still cannot program in it freely, which makes it unique among the 20+ other programming languages I know.
IMHO Forth has a longer learning curve than other languages and requires a significant mental-model shift if you are used to conventional languages.
It does not suit everyone. It seems more like learning a new human language with a large number of words.
Chuck Moore created Forth to improve his productivity versus conventional tools, and there is some anecdotal evidence that it worked for him and those who worked with him, particularly where direct hardware control on new hardware could be interrogated and validated via the interpreter rather than an edit/assemble/link/test loop.
Which layer is an embedded Lua script controlling execution of a Java program running on the JVM within Rosetta2 and then getting translated into ARM microcode within the M1 CPU?
I don't know about layer, but according to this book[1], it is a fourth-generation language. I've never associated it with RPG and FoxPro, but I guess you learn something new every day.
This is a general problem with the modern publicly funded education system: it prizes memorization of repetitive schemes, as it's easy to check on a mass scale who is able to memorize them. It's several orders of magnitude harder to check who really is material for a scientist among the same large population of candidates. I was educated at a top university in my Central European country, and to this day I cannot shake off memories of people who passed calculus exams by memorizing solutions of integrals instead of understanding how to solve them. It was a general scheme: mediocre students interested only in "getting the paper", a.k.a. the diploma, passed exams mostly flawlessly, and some of them even got scholarships[sic(k)!], while people interested in actually understanding the material and doing projects on their own (most students made projects in groups, changing only minor details, and teachers pretended not to see it) struggled within that system. The system had memorization without understanding, and cheating, as fundamentals of its construction, and people who resisted following that pathological scheme were simply penalized. Attempts to reason with academic teachers in many cases resulted in absurd remarks like "everyone has equal requirements for passing classes". Puke-inducing every time I think about it.
0. The Federal Reserve creating inflation of the US Dollar, artificially pumping the stock market up and up since 1987. Every stock market crash only intensifies the pace of currency printing.