In the 90s, as a working C programmer, I would make it a practice to read thru man 2 and man 3 from time to time, as well as to read all the options for gcc. The man pages were pretty good at covering the differences, if not all the implications of the differences.
And to this day I strive to minimize the dependencies of my software - any dependency you have is an essentially unbounded cost on the maintenance side. I used to sign up for the security mailing list for all my dependencies, and would expect to have to follow all the drama and politics for those dependencies in order to have context on when to upgrade and when not to. With Go I don't do that, but with Python I sometimes still do. And I've been reading the main R support email list forever. I still think that a periodic reading of lwn.net is an essential part of being a responsible programmer.
I will also say, reading the article reminds me of the terror of strings in C without a good library and set of conventions, and why Go (and presumably all other modern languages) is so much more relaxing to use. The place I worked had everything length-prefixed, and by the end of the 90s all the network parsing was code generated from protocol description files, to remove the need for manually writing string-copying code.
I was somewhat dismissive, but I agree that this way of thinking about dependencies is the right approach for systems programming. And it is fair to expect that users will read the manual in detail for any tool or library they adopt in the contexts where C is used.
It's just a bit frustrating to deal with so many names that are hard to understand and remember. C-style naming forces you to refer to the docs more often, and the docs are usually more sparse and less accessible than in other ecosystems. Man pages are relatively robust and they were a delight back in the day, but they have not been the gold standard for decades, and the documentation conventions for third-party libraries tend to be quite weak.
A distressing number of software engineers have overly accurate memories and don't notice when things become excessively cryptic or arcane.
However, implementations being much more open source now means a lot of bad documentation can be overcome by reading the code or, if needed, stepping thru it with a debugger. Wrong documentation is still expensive, though. I have a bitter taste in my mouth from integrating with the OpenTelemetry Go libraries. It seems to be sorted now in 1.27 and 1.28, but in 1.24 and for a few versions after, the docs were wrong, the examples were not transferable, and it took 5x the time it should have.
> The place I worked had everything length-prefixed, and by the end of the 90s all the network parsing was code generated from protocol description files, to remove the need for manually writing string-copying code
Would you expound on this please? I've seen people reserve the first element as the size of the buffer, but then a char buffer, say, has a size limit based on the range of char itself. What advice would you give yourself if you were just becoming a senior developer today? I do embedded myself.
For protocols, you might be worrying about the bytes and use a 1-, 2-, 4-, or 8-byte length, or some complicated variable-length integer like ASN.1's or something homegrown (though in practice, with the slower networks of the time, if you had a 2 MB message you'd split it into 64 KB chunks that fit your 16-bit length-prefixed protocol and stream it). For the parsed data, you'd probably use size_t or ssize_t and Byte[] for portability. The standard parsing API would pass in (buf, buf_len) (or (pos, buf, buf_len)), and the standard struct would be (size_t name_len, Byte *name), where Byte was typedef'd to unsigned char so you wouldn't pick up signed char by mistake.
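To make that concrete, here is a minimal sketch of that kind of parsing API, assuming a 2-byte big-endian length prefix; the names parse_name and Name are illustrative, not from the original system:

```c
#include <stddef.h>

typedef unsigned char Byte;  /* so you don't pick up signed char by mistake */

/* A parsed, length-prefixed field: points into the buffer, no NUL needed. */
typedef struct {
    size_t name_len;
    const Byte *name;
} Name;

/* Read a 2-byte big-endian length prefix, then that many bytes.
 * Advances *pos past the field; returns 0 on success, -1 if the
 * buffer is too short (checks are written to avoid size_t underflow). */
static int parse_name(size_t *pos, const Byte *buf, size_t buf_len, Name *out)
{
    if (*pos > buf_len || buf_len - *pos < 2)
        return -1;
    size_t len = ((size_t)buf[*pos] << 8) | buf[*pos + 1];
    *pos += 2;
    if (buf_len - *pos < len)
        return -1;
    out->name_len = len;
    out->name = buf + *pos;
    *pos += len;
    return 0;
}
```

The appeal of this shape is that the bounds check lives in exactly one generated or hand-audited function, instead of being re-derived at every strcpy call site.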