Recorded live on Twitch, GET IN: https://twitch.tv/ThePrimeagen
Become a backend engineer. It's my favorite site: https://boot.dev/?promo=PRIMEYT
This is also the...
A person I watch frequently gave me this gem.
Please forgive the annoying thumbnail.
I’ve seen this sentiment in a few places recently, and as a software engineer with 20 years of experience I can say with 100% certainty that this is a terrible (and dangerous) trend when it comes to programming.
Undergrads should absolutely be learning how memory works, how to allocate it, when to free it, and what issues you can get into when you don’t do it properly. Sheltering them from such things will lead to a more ignorant generation of developers, which will lead to a lot of headaches down the road, for everyone.
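To make that concrete, here is a minimal C sketch (mine, not from the thread) of the allocate/free cycle and the classic mistakes the commenter is pointing at:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Ask the allocator for 16 bytes on the heap; malloc can fail, so check. */
    char *buf = malloc(16);
    if (buf == NULL) {
        return 1;
    }

    strcpy(buf, "hello");   /* fine: 6 bytes (including the '\0') fit in 16 */
    printf("%s\n", buf);

    free(buf);              /* nothing frees this for you in C; you do it yourself */

    /* The issues the commenter means, once the block is freed:
     *   printf("%s\n", buf);   use-after-free: undefined behaviour
     *   free(buf);             double free: likely heap corruption
     *   never calling free(buf) at all: a memory leak
     */
    return 0;
}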
This is almost certainly not intentional. The AI just can’t differentiate between unsafe as in NSFW and unsafe as in manual memory management.
The programming field is going through what math has been going through for ages. Many people don’t want to learn how things work because they can use a calculator or software to do basic maths. But then when it comes to actually understanding what’s going on, a lot gets lost.
Agreed. I’ve been seeing the trend myself and it’s a damn shame. Ignorance isn’t a reason to shelter others.
Good thing we have Rust.
When I clicked the thread I thought this was a joke and am now experiencing a degree of horror.
You can do all that with Rust. Maybe C/C++ is good to teach if the professor explains why they should almost never be used, but IMO it would probably be better to just educate them using a well-designed language like Rust so that they have the experience for a career.
Manual memory management has about as much applicability these days as assembler did back when I was doing my degree. It should be covered as part of learning How Things Work Under the Hood, it’s still needed for some kinds of specialist work, but many—perhaps even the majority of—people writing code will never need to deal with it in the real world, because the languages in which most code is written these days all have some form of memory management.
There is still an enormous amount of C++ code in use (and other unsafe languages, for that matter). It is still an actively developed and used language, and likely will be for many years to come. Having at least a basic grounding in it is a very valuable element of flexibility for any potential programmer, as well as an understanding of the underlying concepts.
Yes, there is C++ code still being written, and it’s a reasonable choice for some lower-level and complex code, but it’s a much smaller percentage of the whole than it was even ten years ago. Web stack stuff tends to be written in memory-managed languages, and it probably accounts for more lines of new code than anything else these days (note that I didn’t specify good code). You can have a whole career without ever getting down into the weeds.
Similarly, assembler still had some practical applications in games and video codecs when I got out of school. These days, I wouldn’t expect to see hand-written assembler outside of an OS kernel or other specialized low-level use. It’s still not gone, but it’s been gradually going away for many years now. Languages without memory management will likely never completely disappear, and they have massive inertia because of the sheer number of C utility libraries lying around, but they’re gradually becoming more marginalized.
What it comes down to is: understanding how memory works is useful and broadening for someone who wants to program, but it’s no longer necessary even for a professional. (I think we’re mostly in agreement on everything except relative importance, in other words.)
Memory unsafe languages will always have value in applications where speed and performance mean anything. Embedded programming and video games are the obvious examples, but pretty much any application taken far enough will eventually demand the performance benefits of memory unsafe languages. Some even require writing assembly directly. Contrary to common dogma, the compiler isn’t always best.
Yeah, but that doesn’t mean you should allocate a billion arrays just because the memory is managed for you. It’s still inefficient.
You don’t need to understand the details of how memory is allocated to understand that taking up too much space is bad, and that there’s often a tradeoff between programmer time, machine execution time, and memory allocated, though.
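For illustration only (the loop count and buffer size are made up), here is a small C sketch of that tradeoff: the first helper grabs a fresh buffer on every iteration, the second reuses one allocation, trading a little extra programmer care for far less allocator work.

#include <stdlib.h>
#include <string.h>

enum { ITERATIONS = 100000, CHUNK = 4096 };   /* made-up workload sizes */

/* Quick-to-write version: a fresh heap allocation every iteration.
 * The allocator does ITERATIONS times more work, and in a garbage-collected
 * language the equivalent pattern leaves the collector to clean up after you. */
static void allocate_every_time(void) {
    for (int i = 0; i < ITERATIONS; i++) {
        char *buf = malloc(CHUNK);
        if (buf == NULL) abort();
        memset(buf, 0, CHUNK);   /* stand-in for real work */
        free(buf);
    }
}

/* More careful version: one allocation reused for the whole loop.
 * Costs a bit more programmer thought, saves allocator traffic. */
static void reuse_one_buffer(void) {
    char *buf = malloc(CHUNK);
    if (buf == NULL) abort();
    for (int i = 0; i < ITERATIONS; i++) {
        memset(buf, 0, CHUNK);   /* stand-in for real work */
    }
    free(buf);
}

int main(void) {
    allocate_every_time();
    reuse_one_buffer();
    return 0;
}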