OK, so I've never taken statistics, but I'm pretty sure this is a statistics problem. My professor for my operating systems course curves test grades based on standard deviations. This is the template for grades:

pt > u + 1.0σ → A
u < pt < u + 1.0σ → B
u - 1.0σ < pt < u → C
u - 2.0σ < pt < u - 1.0σ → D
pt < u - 2.0σ → F

where u is the average score, pt is my score, and σ is the standard deviation. The average score was 57 and the standard deviation was 12.3. What score would I have needed to get an A?

If I read this right, wouldn't it be a 69.3?

pt > u + 1.0σ → A
u < pt < u + 1.0σ → B
u - 1.0σ < pt < u → C
u - 2.0σ < pt < u - 1.0σ → D
pt < u - 2.0σ → F |

Where u is the average score, pt is my score, and σ is the standard deviation. The average score was 57 and the standard deviation was 12.3. What score would I have needed to get an A?

If I read this right, wouldn't it be a 69.3? |

Yes.
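Spelled out, the A cutoff is just the mean plus one standard deviation. A minimal Python sketch of that arithmetic (the variable names are mine, not from the course):

```python
u = 57.0      # class average (mean)
sigma = 12.3  # standard deviation

# An A requires pt > u + 1.0 * sigma, i.e. strictly above this cutoff:
a_cutoff = u + 1.0 * sigma
print(a_cutoff)  # 69.3
```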

As a side note, this curve expects that about 15.87% of the test-takers will get an A, and about 2.28% will get an F.
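Those tail probabilities can be checked with the standard normal CDF; Python's stdlib `math.erfc` is enough, no extra libraries needed (a quick sketch of mine, not anything from the thread):

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable Z."""
    return 0.5 * erfc(z / sqrt(2))

p_A = upper_tail(1.0)  # P(score > u + 1 sigma)
p_F = upper_tail(2.0)  # P(score < u - 2 sigma), by symmetry of the normal curve
print(f"A: {p_A:.2%}  F: {p_F:.2%}")  # A: 15.87%  F: 2.28%
```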

As a side note, this expects that about 15.87% of the test-takers will get an A, and about 2.28% will get an F. |

Funny thing about education these days (learn for the exam, forget it after): I remember doing stats in one of my math classes, and I remember all of the terms you used, but I have absolutely no idea what they really mean or what to do with them.

Funny thing about education these days (learn for the exam, forget after) |

This is unfortunately true for a lot of classes, even ones I found interesting. If it's outside my major, I tend not to remember it; it's just knowledge that doesn't get used after the class is over.

about 2.28% will get an F |

How does this work? If the average is a 57%, it seems as if there would be more people who scored < 60%


How does this work? If the average is a 57%, it seems as if there would be more people who scored < 60% |

If somehow the distribution of scores for your class were uniform, then all grades would be more or less equally likely.

The numbers I quoted follow from the implicit assumption made when someone uses that curve: that scores are normally distributed, i.e. that scores closer to the median are more likely.

The F range is farther (2 sigmas minimum) from the median than the A range (1 sigma minimum), so it's less likely.
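To make "less likely" concrete, here is a sketch that computes the probability mass of each grade band under that normal-distribution assumption (my code, using the same band boundaries as the curve above):

```python
from math import erfc, sqrt

def cdf(z):
    """P(Z < z) for a standard normal variable Z."""
    return 0.5 * erfc(-z / sqrt(2))

# Probability mass of each grade band under the curve from the thread.
bands = {
    "A": 1 - cdf(1),         # pt > u + 1 sigma
    "B": cdf(1) - cdf(0),    # u < pt < u + 1 sigma
    "C": cdf(0) - cdf(-1),   # u - 1 sigma < pt < u
    "D": cdf(-1) - cdf(-2),  # u - 2 sigma < pt < u - 1 sigma
    "F": cdf(-2),            # pt < u - 2 sigma
}
for grade, p in bands.items():
    print(f"{grade}: {p:.2%}")
```

Even though both A and F are open-ended tails, the F band comes out much smaller than the A band, which is the asymmetry being described here.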

