Okay, so the part IX was a bit facetious, but this topic has come up here a fair bit over the past few years. I've been reading these posts for quite a while, yet I remain amazed by the wide variance reported. Some say they see accuracy fall off in as little as 40 rounds, while others claim to go 200+ rounds without cleaning the bore and see no degradation in accuracy. Can the curve really be that flat?
I'm guessing that it really is a flat (wide) normal curve. Just as some fit folks can run 2 miles without stopping while other, similarly fit folks can run marathons without walking, rifle barrels show a large standard deviation when it comes to fouling and accuracy. All rifles and barrels are different, and thorough testing of my own rifle is the only way to fully ascertain what I'm working with; regrettably, I've yet to perform such a test.
In all of my load development I have used a round-robin method, shooting groups in an order that spreads the fouling evenly across the several test loads, so the first loads aren't favored over the later ones. The problem is, I have never determined how many rounds I can fire before fouling starts to hurt all of the test loads. So I've decided to characterize my rifle before doing any further load development.
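For anyone unfamiliar with the ordering, here's a minimal Python sketch of the round-robin idea. The load labels and shot count are hypothetical, picked just for illustration: each pass fires one shot of every load, so no load sees a systematically cleaner or dirtier bore than another.

```python
# Round-robin firing order: one shot of each test load per pass, so fouling
# is spread evenly across loads instead of piling up on the last loads tested.
loads = ["A", "B", "C", "D"]  # hypothetical test-load labels
shots_per_load = 5            # e.g., four interleaved 5-shot groups

firing_order = [load for _ in range(shots_per_load) for load in loads]

# Prints: Shot  1: load A, Shot  2: load B, ... repeating A-D five times.
for shot, load in enumerate(firing_order, start=1):
    print(f"Shot {shot:2d}: load {load}")
```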
I intend to fire multiple 5-shot groups with one consistent load and determine at what round count accuracy begins to suffer. Beginning with a clean barrel, I expect the first one or two groups to spread out as the initial fouling occurs; subsequent groups should shrink as the barrel gets into "the zone" and remain fairly uniform, at least as well as my shooting from a rest will allow. Then, as fouling becomes a factor, group sizes should increase in a measurable way, given a large enough sample size. My goal is to find the maximum allowable number of rounds between cleanings so I can set an optimum number of test loads to evaluate per cleaning cycle.
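To make "measurable" concrete, here's a rough Python sketch of how I'm thinking of screening the results. Every number below is made up for illustration; the idea is to treat the "in the zone" groups as a baseline and flag any later group that opens up well beyond the baseline's normal spread.

```python
import statistics

# (cumulative rounds fired, 5-shot group size in inches) - illustrative data,
# not real results.
groups = [
    (5, 1.10), (10, 0.95),                            # fouling in
    (15, 0.70), (20, 0.65), (25, 0.72), (30, 0.68),   # "the zone"
    (35, 0.90), (40, 1.05),                           # opening up again?
]

# Treat groups 3-6 as the fouled-in baseline (an assumption for this sketch).
baseline = [size for _, size in groups[2:6]]
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

# Flag any later group more than 2 standard deviations above the baseline
# mean; with samples this small it's a rough screen, not a proof.
for rounds, size in groups[6:]:
    if size > mean + 2 * sd:
        print(f"At {rounds} rounds: {size:.2f}\" looks degraded "
              f"(baseline {mean:.2f}\" +/- {sd:.2f}\")")
    else:
        print(f"At {rounds} rounds: {size:.2f}\" within normal spread")
```

With only a handful of 5-shot groups the statistics are thin, so I'd want to see several consecutive groups past the flag point before blaming the fouling rather than the shooter.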
So, what does the community think? I'm just trying to maximize my range time in testing load data. How does our group go about determining how many rounds is too much in comparing various data sets?