by Anonymous Coward writes:
on Thursday October 11, 2012 @01:04PM (#41620621)
Won't CPU branch prediction make the if-based version as fast as the pointer-to-pointer version? If you have a thousand-element list, then there's only a 1 in a thousand chance that you're deleting the first element; all the other times you head down the other side of the conditional.
A debugged program is one for which you have not yet found the conditions
that make it fail. -- Jerry Ogdin
Not So Fast On The Pointers (Score:4, Interesting)
simply good use of pointers-to-pointers etc. For example, I've seen too many people who delete a singly-linked list entry by keeping track of the "prev" entry, and then to delete the entry, doing something like
if (prev)
prev->next = entry->next;
else
list_head = entry->next;
and whenever I see code like that, I just go "This person doesn't understand pointers". And it's sadly quite common.
People who understand pointers just use a "pointer to the entry pointer", and initialize that with the address of the list_head. And then as they traverse the list, they can remove the entry without using any conditionals, by just doing a "*pp = entry->next".
I'm going to have to disagree with Linus on that one. When I'm coding in a mixed group of people that includes old farts and interns and the performance isn't that critical, I'll do the former over the latter to ensure that everyone in the group will understand it easily and will have less chance of breaking it if they change it. It can mean the difference between code that is robust and code that is fragile when it's being worked on, not just when it's running.
Re:Not So Fast On The Pointers (Score:5, Interesting)
I believe that his pointer example is more a matter of personal style. I can easily see how doing away with the conditions will make for more efficient code, but in many cases, the preference he cites might also make the code a little more obfuscated. However, even that's nothing that a single-line comment wouldn't fix, making sure that whoever is reading the code fully realizes the intent behind it. I think perfectly valid arguments can be made for doing it either way.
Personally, I would classify this as a type of pointer design pattern that is ideal for linked list data structures, but I would not suggest that a person who doesn't use every clever pointer design pattern at every opportunity is necessarily less knowledgeable than one who does. In many cases, in fact, a person in the latter category may arguably be guilty of simply trying to show off, rather than actually getting done whatever needed doing.
Re: (Score:2)
I'm definitely a big fan of readability in code. Of course you have to know your intended audience. People reading kernel code are not the same audience as, I don't know, people writing a financial application in a high level language.
With the work I do I am able to favour readability over efficiency (then optimize if required.. no premature optimization). It makes maintenance so much easier (and let's face it, code spends most of its life being maintained). Code is already way harder to read than write a
Re: (Score:2)
Sorry, but I think Linus is right here.
I never said Linus was wrong. In fact I was agreeing it is correct in the kernel. A kernel dev would have no problem reading and understanding his example. Your average joe coder might stumble over it.
This is simply a smaller-scale instance of the same kind of improvement that object-oriented programming is.
What? That doesn't make any sense in the context of replying to my comment.
Re: (Score:2)
Average Joe coder must be educated, not kept in the darkness.
No. Not really.
Not all developers need to be able to read advanced pointer usage, and tricks for the sake of tricks are stupid. There must be a measurable performance difference, or it has to be easily readable by the group developing it. If it isn't easily readable and there is no performance justification, then education isn't the issue; it's developer ego: "Well, I can read it."
Yeah, good for you. But I don't care about that. I care about whoever ends up having to maintain your difficult to read code. Prematu
Re: (Score:1)
Best sig of the thread: (Score:2)
I nominate yours as the best sig of all this thread.
Re:Not So Fast On The Pointers (Score:4, Insightful)
Processors and memory are cheap, a developer's time isn't.
And that attitude in application developers is the reason my 1 GB RAM, dual core laptop runs like sludge.
We've bought a few externally supplied programmes at my company recently, all promising to be within our minimum requirements for hardware. But try to run even a handful of them at once and the whole thing crawls to a halt. The developers on each of them were undoubtedly saying "processors and RAM is cheap, we don't need to optimise!". Bastards the lot of them.
Re: (Score:2)
So the apps run, and you're complaining, when you could buy 4G of RAM for about $60.
Guh. Make that 8G for $30.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820104339 [newegg.com]
How many programmer hours do you suppose it'd take to optimise the software you're running? Those hours aren't free, and *in general* highly optimized code is more work to maintain - which will keep software costs up.
If you care about performance, invest in performance.
Re: (Score:2)
Bah. Software engineering just plain stinks most of the time. PC software is unreliable and slow, and it bloats faster than hardware advances, so computers today don't really let people get things done any faster (excluding, of course, things like media encoding).
There's hardly any feeling of ownership in software made by big companies or groups, so there's little incentive to take pride in one's work, so programmers are lazy, and probably under pressure from bean counters and unreasonable managers and cu
Re: (Score:2)
Maybe I've been using computers longer than you, or maybe not. But your experience has been the opposite of mine (over the past 30 years).
Computers are getting ridiculously faster, cheaper, smaller. Software has failed to keep up since somewhere around 2000, in my experience.
Re: (Score:2)
The 1GB laptop is a company laptop that I have no control over. Our company has something like 15,000 laptops and desktops out there in frontline use. Buying 30,000 2GB RAM modules at the best internet price I can find (£12.98) would cost £390,000. Then there's the labour costs of fitting new RAM modules to 15,000 machines at over 700 different locations. Programming time might not be free, but then nor is half a million pounds.
So yeah "just buy more RAM derp derp" is not generally a helpful res
Re: (Score:2)
OK, tell us the rest of the math. How many of those laptops are using the software you mentioned? How much is the license for? Per year? How many other programs do those laptops run? Are you willing to pay software money to improve all of them?
Or would the £400K be a bargain?
Re: (Score:2)
All of them are running dozens of programmes, most of which were developed out of house (most of which off-the-shelf). Not necessarily the same dozen on each machine, obviously, but they're all heavily used; we're in a very technology-heavy industry.
If every single one of the development teams for every single one of the dozens of different programmes in use has the same attitude ("Who cares if it's hogging RAM and churning CPU cycles, they're both cheap these days!"), then every single one of them runs more poorly than it might, forcing us to either upgrade the hardware unnecessarily at great expense, or accept a god awful user experience.
Re: (Score:2)
...
If every single one of the development teams for every single one of the dozens of different programmes in use has the same attitude ("Who cares if it's hogging RAM and churning CPU cycles, they're both cheap these days!"), then every single one of them runs more poorly than it might, forcing us to either upgrade the hardware unnecessarily at great expense, or accept a god awful user experience. That's not a good situation just because each developing company wants to cut corners and shave a few quid off the costs.
But it's totally reasonable. The bottom of the line DELL ships with 2G of RAM:
http://configure.us.dell.com/dellstore/config.aspx?oc=svctbpd1&model_id=vostro-260&c=us&l=en&s=soho&cs=ussoho1& [dell.com]
Twice the RAM of your current systems. And that's just the bottom of the line, now. I don't think it's reasonable for you to complain about brand new software performance on a poorly stocked system -- and your systems are poorly stocked. Hell, you are *at* the minimum for running Windows 7 (or 8
Key issue in kernels, atomicity (Score:2)
I think perhaps the point that he was making about designing with pointers wasn't fully appreciated by everyone, because he didn't really spell it out. It's not just a matter of preferred coding style nor clarity, far from it.
The unconditional pointer update approach is atomic by virtue of the update being performed in a single memory write cycle, whereas the longer conditional form is clearly not atomic, and to make it atomic would require using locks. (There's a bit more to it than that because you st
Re: (Score:2)
You've actually made a good point... atomicity is actually a *VERY* good reason not to use the conditional form, but not every application requires that. It's certainly not generally going to be the case that atomicity is a requirement, and often when it is, it might be more practical to use an explicit mutex on your data to ensure nothing else touches it while you're using it. Even when mutexes are not practical (and I know that can very easily be the case), however, and atomicity is still requir
Re: (Score:2)
The unconditional pointer update approach is by no means atomic unless you use memory barriers or atomic instructions. There is a reason C++11 added <atomic>.
Re: (Score:3)
I've been programming in C for nearly 30 years... when I saw his counter-example, I still had to pause for just a second to think about it. Yes, I see what he did, and I've even used that pattern myself in software that I've written. There are quite legitimate reasons to use that form that are applicable to system programming, so I don't object to it over the conditional form (I actually even prefer it, in many cases), but a simple one-line comment which clarifies the intent would definitely be preferre
Re: (Score:1)
Re: (Score:2)
I think this is what he prefers to see in code:
struct node {
    int x;
    struct node *next;
};
struct node *list_head;
void delete_item(struct node **pp, int i);
int main(int argc, char *argv[]) {
    delete_item(&list_head, 42);  /* 42: example value to delete */
    return 0;
}
void delete_item(struct node **pp, int i) {
    for (struct node *entry = *pp; entry; entry = entry->next) {
        if (entry->x == i) {
            *pp = entry->next;
            break;
        }
    }
}
Re: (Score:2)
Ugh, yes that's right. I was trying to keep it in his words and forgot to change entry.
Anyway, I'm sure he's referring to the extra conditionals to check prev/current, not for determining the correct entry.
Re: (Score:2)
I think you could also add this after the if clause:
else {
pp = &(entry->next);
}
Re: (Score:1)
I'm going to have to disagree with Linus on that one. When I'm coding in a mixed group of people that includes old farts and interns and the performance isn't that critical, I'll do the former over the latter...
You're not disagreeing with Linus' point here. You're referencing an entirely different scenario.
You: When performance isn't critical
Linus: He's always working on the kernel, as are those whose pointer code he called 'sad' here. Performance is *always* critical with kernel code.