Do you want good data, or usable data?
Got a question I haven’t seen for a while. Here it is, with my answer (and a little bit of additional explanation) to follow:
Thanks for the site! You do not post the altitude and temperature of your results (unless I missed that). Can you let us know what your reference points are? Also, what effect would altitude and temperature variation have on your results?
Here’s the answer I gave:
Well, it’s been a while since anyone asked about that … thanks!
We did discuss this early on, and decided pretty quickly that while both of those would indeed have an effect (as would changes in barometric pressure), it would be so small as to not matter given the accuracy of our testing equipment and the limited number of rounds tested. If you were trying to get really good data, everything would have to be much more rigorous and controlled … and we would never ever have gotten the data that we did. So, as I remind people: consider the results to be *indicative*, not definitive. In other words, don’t try to read too much into variances of a few feet per second, or convince yourself that such minor differences really matter.
Hope that helps to give a little perspective.
Oh, and I can answer one of your questions: almost all the testing was done at an elevation of approximately 744′ above sea level, according to commercial GPS systems.
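To put some rough numbers on the atmospheric point: here’s a minimal back-of-the-envelope sketch (not part of our actual procedure — the station pressure and temperatures below are illustrative assumptions, not measurements from our test sessions). It just uses the ideal gas law to compare air density at roughly our elevation, across a plausible range of test-day temperatures, against the standard sea-level value:

```python
# Rough comparison of dry-air density under different conditions,
# using the ideal gas law: rho = P / (R_air * T).
# Pressure and temperature values are illustrative assumptions only.

R_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def air_density(pressure_pa, temp_c):
    """Dry-air density in kg/m^3 from station pressure (Pa) and temperature (C)."""
    return pressure_pa / (R_AIR * (temp_c + 273.15))

# Standard sea-level reference: 101,325 Pa at 15 C.
rho_ref = air_density(101_325, 15)

# Assumed station pressure of ~98,600 Pa at roughly 744 ft (~227 m) elevation,
# over a cold-to-hot spread of hypothetical test days.
for label, pressure, temp in [
    ("744 ft, cold day (0 C)",  98_600, 0),
    ("744 ft, warm day (20 C)", 98_600, 20),
    ("744 ft, hot day (35 C)",  98_600, 35),
]:
    rho = air_density(pressure, temp)
    print(f"{label}: {rho:.3f} kg/m^3 "
          f"({(rho / rho_ref - 1) * 100:+.1f}% vs. standard sea level)")
```

Even across that spread, density only moves by single-digit percentages, and over the few feet between muzzle and chronograph that works out to velocity differences much smaller than the normal shot-to-shot spread — which is why we didn’t try to correct for it.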
I think that answer is pretty clear, but I want to emphasize one part of it: if we had set out to provide really rigorous, statistically significant data, chances are we would never have even gotten past the first test sequence. And that means there would be NO BBTI.
As it is, we have tested something in excess of 25,000 rounds over the last 7 years, at a personal cost of more than $50,000. And that doesn’t begin to include the amount of labor that has gone into the project. To get really solid, statistically significant data, we probably would have needed to fire at LEAST three or four times as many rounds. With three or four times the time spent testing. And crunching the data. And three or four times the cost out of pocket.
Which would have meant that we probably would never have gotten through a single test sequence.
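For the statistics-minded, here’s a minimal sketch of why tightening the numbers gets expensive so fast: the uncertainty in an average velocity shrinks only with the square root of the number of shots, so cutting it in half takes roughly four times the rounds. The shot-to-shot standard deviation and shot counts below are made-up illustrative values, not our actual data:

```python
import math

def ci_halfwidth_fps(shot_sd_fps, n_shots, z=1.96):
    """Approximate 95% confidence half-width for a mean velocity,
    given the shot-to-shot standard deviation and number of shots."""
    return z * shot_sd_fps / math.sqrt(n_shots)

# Hypothetical shot-to-shot standard deviation of 20 fps for a single
# load/barrel-length combination (illustrative, not measured data).
SD = 20.0

for n in (3, 10, 30, 100):
    print(f"n = {n:3d} shots -> mean known to about +/- {ci_halfwidth_fps(SD, n):.1f} fps")
```

Multiply that by every cartridge, load, and one-inch barrel increment we test, and you can see why squeezing out another few fps of precision would have meant several times the rounds, the range time, and the money.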
So it’s a matter of perspective. Do you want some data which is reasonably solid, and gives a pretty good idea of what is going on with different cartridges over different barrel lengths? Or do you want very accurate, highly rigorous data which would never have been produced?
Hmm … let me think about that … 😉
Jim Downey
PS: We haven’t forgotten about the .45 Super/.450 SMC tests — it’s just been a busy summer. Look for it soon.