And what a job of it he does.
The motivation here was to test Modern Spartan Systems’ line of gun cleaning products against established known quantities with proven performance. Their promise of no foul smell, low toxicity and some of the other claims they made caused me to get curious enough to do a Pepsi challenge for their whole cleaning system. This includes Accuracy Oil, which claims to increase velocity, cut group size and extend barrel life. It also includes their Carbon Destroyer and Copper/Lead Destroyer.
I’ve already started long term testing of their Accuracy Oil’s claims of longer barrel life and improvements in velocity, group size and consistency. Those experiments are continuing and I’ve built an impressive data set so far, with more coming in every week. In the meantime, the fundamental ability of the fouling removal products to perform as claimed had not yet been established by any usefully conducted experiment I could find. So, I’m doing it. I’ve already put the Carbon Destroyer up to the Pepsi challenge and it flat works. It’s pleasant enough to use and worked like a charm on everything from revolvers to pistols to high power modern rifles to black powder cartridge rifles. The way it worked on our set of Trapdoor Springfields was terrific. What about the big one though…COPPER!?! Let’s git’er done.
I’ve got enough barrels around with sufficient fouling, including some I’m entirely willing to destroy, to give a good test of effectiveness and side-effects. In the spirit of experimentation I set up the first round of testing with 3 barrels:
- Stock Glock 21 barrel. 1000’s of rounds since being cleaned.
- Savage 10 .308 24″ heavy barrel, >500 rounds since cleaning.
- Black Hole Weapons 26″ .223 barrel > 200 rounds since cleaning.
Cliff’s Notes: In short, MSS’s Copper/Lead Destroyer is very effective. Zero question about that.
More detailed findings and experimental procedure:
C/L-D is not as strong a copper remover as Sweet’s by a mile, nor as strong as Wipe-Out, but it’s a lot more pleasant to use than Sweet’s and less messy than Wipe-Out. This is about removing copper, and copper fouling is hard to remove well without damaging the barrel steel. You either use mechanical action, which is by definition damaging to the bore, or chemical action, which may be damaging to the bore. Bore damage can depend on the length of exposure to chemical agents, and some of them are really nasty for everyone involved.
To start I took a G21 barrel that had been belled just in front of the chamber by a squib. It had previously had Carbon Destroyer run through it and then was soaked overnight (26 hours) in Copper/Lead Destroyer, hosed out and stored. I ran some Wipe-Out into it and gave it 15 minutes to soak and pushed a patch through. Zero color change on the patch. Then I ran some Sweet’s in it and let that soak for 5 minutes and pushed a patch through. Zero color change on the patch.
Ok, that’s the null result I was expecting. The barrel was clearly clean of copper to begin with but you don’t know the state of fouling before the 26 hour soak. Could have been a lot, could have been a little, could have been none for all you know, right?
Now to find the more interesting results. I took a factory Savage .308 Win barrel that I’d abused and not cleaned in literally years. It had at least a couple hundred rounds put through it before it got yanked and set aside. I started by running a patch of Sweet’s through the barrel without running a brush through it first, hoping that the carbon that stayed behind would protect some of the copper from the Sweet’s to serve as an indicator later. It came out with gooey gobs of blue on the patch with no soak at all, just applied and patched out. I immediately took the barrel outside and hosed it out for a solid couple minutes to keep the Sweet’s from finishing the job. I plugged the breech with a .45acp case, filled the bore with Copper/Lead Destroyer and gave it 2 hours to soak. After the soak I ran a patch through it a couple times (remember, no color change on the patches, C/L-D doesn’t do that) and then went and hosed it out. Now I needed to see if there was any copper still in there, so I ran Wipe-Out into the barrel and gave it a 20 minute soak. After pushing a patch through, what I found were traces of blue streaking on the patch and plenty of black and brown. Not much blue, but enough to tell me that the carbon was in fact protecting the copper. There wasn’t enough copper coming out to make a good finish to the experiment on that barrel, so I reset the experiment by moving on to the .223 barrel.
The .223 barrel started with at least 200 rounds since the last even partial cleaning, so it got a thorough carbon removal with Carbon Destroyer. When patches wrapped around a bore brush came out without any black or brown on them, I called that done. I put a fired case in the breech, closed the bolt and then filled the bore with Copper/Lead Destroyer and let it soak for 2 hours. Then I pushed a pair of patches through, which came out not much different than they went in. Now, to see if the C/L-D worked, I ran a patch of Sweet’s down the bore, gave it a solid 3 minutes to soak and pushed another patch through looking for color change and got NONE AT ALL. That was a null result I did not honestly expect. I expected to find some copper remaining, I mean Sweet’s is as aggressive as it gets. But no.
What’s that all mean? Leave the Copper/Lead Destroyer to soak a while and it works as thoroughly as Sweet’s or Wipe-Out. I really like using C/L-D way more than Sweet’s. I can’t even stand opening the bottle on that cat piss smelling Sweet’s. I actually really like Wipe-Out too and will continue to use it at the range because it’s super easy to deal with there. At home though, I think I’ve found my new cleaning product suite. All the chemicals I need are now finally not unpleasant.
Modern Spartan Systems – Copper/Lead Destroyer: No bad smell. A detergent-y smell; cold bluing solution is what it reminds me of most. The directions say you can leave it in the barrel safely for many hours, even overnight. I left it in a G21 barrel for 26 hours with no adverse effect noted. You MUST use a carbon solvent prior to applying it for it to be properly effective. Modern Spartan’s carbon remover works great. Getting C/L-D to stay wet in the barrel was another story. It dried quickly in my low humidity area. I eventually stuffed a fired case in the breech, stood the barrel up and filled the bore on rifles. On pistols it was easier to soak a narrow strip of paper towel in it, thread that down the bore and let it sit that way overnight. Directions say 3-5 minutes of soak. I got best results on heavy fouling after 2 hours. No color change on the patch, so it’s a little hard to “know” when you’re done.
Wipe-Out: It’s got a smell but nothing like Sweet’s. Can be left in the barrel overnight; no ammonia. It’s a foam that expands, so some will end up in your action and it’ll probably drip out of the muzzle, making it a little messy to use. Patches change color to blue if copper is present. Works on carbon and copper. Usually 15 minutes is more than sufficient as a soak time.
Sweet’s 7.62: Super strong ammonia smell. Do not leave in barrel longer than necessary, clean residue off skin and gun thoroughly immediately after use. Known to be hard on steel. Must use carbon remover prior for full effectiveness.
I have video and all that jazz but it’s not very interesting TV. It’s just me slowly, methodically and painfully boringly working out the surprisingly obvious. On the upside, MSS’s stuff works like a dream so far. I can officially endorse the Copper and Lead Destroyer and the Carbon Destroyer because I have proven beyond any doubt that they work as advertised.
Now about that Accuracy Oil….
So we have initial results. I’d like to thank you all for the views on my video.
We will be testing this die set more over the next year. This is out of my own pocket and out of my own curiosity. I have the credit card bills, and had the arguments with my wife, to show for it. I must caution, because of some things people seem to have in their heads, that this isn’t ever going to make a 1″ gun into a .5″ gun. Anyone suggesting such a thing is either a fool or a liar. What you should expect is reduced variability in your ammo, which reduces things like flyers and SDs. Small effects on group size should be expected as a normal result of better consistency, but because barrel harmonics are so heavily involved it’s best to keep your hopes in check and out of the land of silliness.
I set up a partially blinded experiment with unfired, 2x fired and >5x fired cases. We (Coach and I) sized up 50rds of each from my Exact die and 50 of each from Coach’s Redding die and tested them in Coach’s rifle. Coach’s rifle has somewhere over 1900rds down the pipe now, which is a concern as you’ll see soon. We set the ammo up identically in everything from components to neck tension. We ran 10 shot groups, each composed of 2 non-consecutive 5-shot groups fired at the same aim point. Coach loaded, packed and labelled the ammo boxes (labels are “1” and “2”) and didn’t tell me which was which till after the shooting was done. I pulled the rounds from the boxes, logged data and called the target to engage while Coach did the shooting. That way neither of us knew during shooting which ammo was being fired at any given time. That was the best way I could think of to remove experimenter-induced bias with a research team of 2.
The result of the first accuracy test was null. That is to say, the difference in average group size was not outside the level of statistical noise. The exception was brand new brass. It always shot more consistently than reloaded brass, so I removed those results from the full data set due to the noise they introduced. We also weren’t meant to be testing new brass, as that wouldn’t apply anyway, but I wanted that data for another experiment I’ve been running. This is all precisely what was expected. I expected no big result (but certainly hoped for one) in accuracy simply by going to full length resizing and having extremely consistent neck tension and headspace.
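For anyone wondering what “not outside the level of statistical noise” looks like on paper, here’s a rough sketch (not our actual analysis workflow, and the group sizes below are made up for illustration) of comparing two sets of group sizes with Welch’s t statistic:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)          # sample variances (n-1 denominator)
    se = sqrt(va / len(a) + vb / len(b))       # standard error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical 10-shot group sizes in inches from each die -- NOT our real data.
exact_die   = [0.72, 0.81, 0.68, 0.75, 0.79]
redding_die = [0.74, 0.83, 0.70, 0.78, 0.76]

t = welch_t(exact_die, redding_die)
print(round(t, 3))  # ≈ -0.377: far inside the noise
```

With only a handful of groups per die, |t| needs to get up around 2 or more before a difference starts looking real rather than like noise, and ours was nowhere close.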
Because the result is null though, we’ll re-run the experiment on that rifle just before we replace the barrel, just to verify the results reproduce reliably. We also did some velocity testing as part of that and there was no statistical change in average velocities or SDs except in the new brass loads, though those were more consistent set to set. Why pull the barrel? The rifle used for that run of the experiment now has ~1900 rounds through it in 6XC with a single load spec (38.5gr H4350, F210M, Norma brass, 115gr HBN coated DTAC). The load is mild, generating only 2800fps, but we know that barrel is within a few hundred rounds of being pulled on principle, if not actual need, as far as match work goes, and it may not be capable of the repeatable accuracy that might show up with the Exact die. So, we’ll try another barrel. A new one. Actually, a new two! So stay tuned, there’s more to come.
In September I purchased 2x new barrels which I got as blanks from the same production run (from Black Hole Weapons). I purchased a new custom reamer in 6XC that produces a chamber that is very tight to the dimensions of the Exact die. Thankfully you can order a reamer with any number of customizations and it’s still the same price as a custom reamer with just 1 custom dimension. Unfortunately it takes weeks for such a reamer to be made. Over the winter I handed the whole works over to a gunsmith friend of mine who also makes ultra-precise gauges as a business. So, he has the equipment and skills to set up barrels that are truly as identical as we could make them, identical enough for a useful experiment to come out of it despite a sample size that’s extremely small.
Anyway, I got both barrels cut, profiled and chambered identically, and at great cost: setting each one up ran double what I normally pay him for a barrel, with over 15 hours of work on each. These are our new match barrels for the next 2 seasons too. Coach and I will be shooting from the same ammo box so we can share data. Maybe we’ll pick up a few points on same-day wind calls.
We did have a non-null result, from a different direction, which I also predicted. With loads that were sized with my ExactShooting.com die we never had trouble closing the bolt. It was, in fact, always exactly the same effort. On the cases sized with the Redding neck die that Coach uses, bolt close effort was either not much or a TON. Some post-facto testing later on with Coach’s FL die showed the same random bolt close effort. This is obviously due to random headspacing, which means that Coach’s FL die probably needs a thou or two buzzed off the bottom. Irrelevant though, because we’re testing what’s available out of the box and his FL die out of the box didn’t cut it. I suspect a lot of FL dies out there may be a little long or short and aren’t sizing things like people think they are.
Those are only the results from a well-used barrel. We will be running this exact same test using the 2 newly set up barrels. One will go on the same gun (Coach’s match rifle) while its twin, which now has just under 400 rounds on it, is on my “Hot Dog Gun” match rifle. I don’t expect any difference but I could wind up being surprised. The new barrel on Hot Dog Gun is extremely accurate so far, better than Coach’s rifle on its first day. We’ve already developed a load for the new barrel that runs things a bit faster (2980fps), so hopefully with more pressure more differences might start to manifest.
One of the cool things about the ES die is you can pull the body/shoulder portion out and still use the neck sizing portion, which itself is easily adjustable for neck tension and neck sizing depth. When you start getting hard bolt close you can dial in .0005″ or .001″ or .0015″ or whatever amount of push-back on the shoulder with an easy click adjustment and know it’ll give you exactly that. We’ll be running a neck tension accuracy test here real soon. We’ll see if .0005″ increments make real differences on paper. First though, I’m ordering some brand new brass for that test.
Cost is fairly high for these dies but not unprecedented. That’s true but beside the point. If you have the money then it’s not an issue anyway. Functionality is THE issue. The die is perfectly functional and makes it super easy to dial in neck tension in .0005″ increments for those really finicky loads, to dial neck sizing depth in .020″ increments and to dial how far back you actually push your shoulders in .0005″ increments. They’ll make one to a reamer print too. How precise are the dies? Well, I had my machinist do some gauging to see if they were that precise and he was pretty darned impressed.
For benchrest guys and F-class guys, I think this really has the potential to up their game a bit, but only because those guys tend toward having done everything else already. BR and F-class are the only places I can think of offhand where neck tension and headspacing are tightly controlled by the shooters, both routinely and with an obsession rarely seen.
Is it going to help Joe Sixpack? Well no, to be honest. Joe doesn’t know enough to get the potential benefit to begin with. Owners of this die will 100% want to keep their brass sorted by number of firings. They’ll know what spring-back is and why it’s important to them, and a lot more. They will be the type that can’t deal with unexpected 5’s instead of 0’s or 1’s in the 4th decimal place of a measurement. The right owner for this die is someone very much like me, in the respect that they are prone to setting up narrowly defined experiments and analyzing the statistical data that results before forming opinions. They’re nerds.
For Coach and me, the benefit is being able to share ammo and ballistics data in a match, not running out of time on match stages due to bolt cycling problems, not overworking or insufficiently sizing the brass, and being able to make subtle adjustments with truly minimal effort, as precisely as adjusting a tactical rifle scope.
Enjoy this freebie from BallisticXLR. ReloadingXLR is an Excel spreadsheet (compatible with Google Sheets, OpenOffice and most other spreadsheet applications) for metallic cartridge reloaders looking to track load performance, reloading costs, firearm inventory, box labels and statistical data.
A number of customers have asked for this resource and since it’s such a useful tool and I’m feeling generous, I’m giving it away for free to the masses. Download by clicking the image below and enjoy!
Here’s some screenshots of what’s included:
Individual Shot Log w/ Group Size calculator
Ammunition Box Labels
Reloading Cost Calculator
Shot String Graphs
Velocity:Temperature Multi-Session Analysis
Individual Shot Statistics
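For the curious, the core math behind a group size calculator like the one in the spreadsheet boils down to extreme spread: the largest center-to-center distance between any two shots. A minimal sketch of that calculation (the shot coordinates here are hypothetical):

```python
from itertools import combinations
from math import hypot

def extreme_spread(shots):
    """Group size as the largest center-to-center distance between any two shots.
    Shots are (x, y) impact coordinates, e.g. in inches from point of aim."""
    return max(hypot(x1 - x2, y1 - y2)
               for (x1, y1), (x2, y2) in combinations(shots, 2))

# Hypothetical 5-shot group, coordinates in inches from point of aim.
group = [(0.0, 0.0), (0.3, 0.1), (-0.2, 0.4), (0.1, -0.3), (0.4, 0.2)]
print(round(extreme_spread(group), 3))  # → 0.762
```

Mean radius and SD-based stats work from the same coordinate list, which is why logging individual shot positions rather than just group sizes pays off.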
Version 10.3 is officially live. This much anticipated upgrade includes a new Loophole Shooting feature, an improved Calc Form, tons of minor formatting fixes and other improvements to make your long range shooting experience as rewarding and successful as possible.
NEW! Loophole Shooting Feature: In response to high demand, the new Loophole Shooting feature has been implemented. It gives the minimum vertical size of loophole required to place a shot on target with the loophole 10 feet (3 meters) from the shooter. There is no other external ballistics application in the world that integrates this feature with your primary DOPE. At this time the Loophole data is only on the 100yrd/m increment Full Sheet tab, on the assumption that if you’re shooting from behind a loophole you’ve got more time to set up your shots, including setting up a sniper range card, which justifies the extra data on the 100m full-sheet tab compared to the 100yrd/m half-sheet tab. If there is sufficient demand we’ll add it to the 100yrd/m half-sheet tab in the next patch release.
Loophole Technical Details: The Loophole Shooting feature provides you a loophole size in inches or centimeters required to make the shot without hitting the edges of your loophole or the barrier it’s been created in. This feature requires careful measurement of your scope height. The level of precision required is now in the .0x inches zone but only if you plan to use the Loophole Shooting feature. If you do not ever need to use this feature then .1″ of slop in your measurement of scope height will be inconsequential.
Why Loophole Shooting: When BallisticXLR was partnered with the RexReviews project with TiborasaurusRex, Rex explicitly forbade providing this feature to the masses. Now that we’ve gone independent, we don’t have to withhold it anymore and in keeping with our custom of providing you the most capable system regardless of who might get upset about it, it’s now been released to the public. We are committed to providing continuous upgrades with new major features and minor features that are already planned as well as responding to the requests of those that use BallisticXLR.
Other Improvements: Major and minor improvements have been lavished upon BallisticXLR version 10.3 which, as our flagship product, it richly deserved. Improvements include a simplified and improved Calc-Form, and font size and color changes to make for easier reading in low light situations. We’ve put in new Sniper Data & Shot Record cards to replace the older FM-23-10 derived versions. Quick start instructions on the inputs page have been clarified and simplified. Borders, colors, shading, contrast and many other elements of style have been tweaked to provide an improved user experience.
As always, the simple download is only $10. You should really consider getting a support entitlement as ballistics is a complex science and setting up a ballistics package as full featured as BallisticXLR can be a little daunting for the uninitiated despite our best efforts to make it as simple as possible. A basic Bronze support entitlement is only $50 and comes with a copy of BallisticXLR. We also have Silver and Gold support levels which increase the number of allowed support requests and reduce the maximum response time. All support entitlements also come with free upgrades for one full year! Don’t miss out on new stuff or 1:1 personalized help when you need it!
Existing Download-Only Customers: If you have purchased a download-only copy of BallisticXLR (does not include BallisticPRS or BallisticDLR) within the last 30 calendar days and would like the upgrade to Version 10.3, email firstname.lastname@example.org with your paypal transaction number & date of purchase and we’ll upgrade you free of charge.
Existing Support Entitlement Holders: If you purchased a support contract & download within the last 365 days you are entitled to a free upgrade to Version 10.3. To redeem your upgrade, email email@example.com with your paypal transaction number & date of purchase and we’ll upgrade you to Version 10.3 free of charge. This upgrade does not extend your support contract.
This is not my normal fare. If you’re not a computer geek you may find the following paragraphs a little bit technical, and quite possibly uninteresting because of that. I’d encourage you to read on though, as what you should come away with is a new way to look at the problems you face and a strategy for dealing with them that will bring you much personal satisfaction, or at least cause you to pull out as little hair as possible.
There is never anything really new in the world of computing. All we have are problems that have been solved before and new flavors of those same problems and solutions. What really changes is that people forget we already solved all of the really difficult problems many years ago. We had to, because they were new problems when computing was something fresh in industry. Now that computing is pervasive, what we have is a repeating cycle of identifying problems to be solved and figuring out how they’ve been solved before, or ignoring the past (at our peril) and creating entirely new solutions which are in fact just different colors of the same solutions we came up with before… if we’re lucky. That amounts to a statement like, “Well, we have a really complex problem, so here’s a stunningly complicated solution.”
I, for one, detest the idea that complex problems need newly invented ultra complex solutions simply because the problem appeared superficially (or actually is) complex or new. There is no problem so complicated that a very simple solution cannot be identified if you think about the problem the right way. There are insanely few problems which are in reality the least bit new. At best, they’re the same problem in a new shape or color, so to speak. In a moment, you’ll be introduced to my preferred method of solving problems, which always yields fairly simple solutions. It does that because it works like the thought process behind early Macintosh computers. Early Macs were built, seemingly, with a notion something like, “Give them so little memory and processing power that they won’t be able to do anything anyway.” I must at this point give a wink and a nod to Douglas Adams, who originally made that exact statement and from whom I’ve borrowed it. There’s a certain amount of sarcasm in that, but hang with me and you’ll see my point.
What I mean by all of that is: simplifying the problem comes down to really seeing where the actual fundamental problem is (Mac users, of which I am one, wanting to do very intensive computational tasks on end-user grade hardware is the fundamental problem) and not where the superficial problem is. In this case the superficial problem is Macs being the preferred platform for those doing computationally intensive tasks, like video editing for example, because they’re user friendly, as opposed to Windows, which is user unfriendly, and UNIX/Linux, which is downright user-hostile. UNIX/Linux server-grade hardware would be the right way to do these computationally intensive tasks, but it sucks for humans to use. So Mac users are the fundamental problem. They picked the wrong tool. Apple responded by making sure that the user would realize that and would eventually put those workloads onto higher end hardware. Now we have video editors doing very small bits of editing on very small bits of video on their Mac and then sending many such snippets to a larger compute cluster for rendering and final processing to come out with a whole “thing”.
Those familiar with “Grid Computing”, “High Performance Compute” and other flavors of the topic know that what you’re really dealing with is a system that understands bounded resource blocks and workload. What it amounts to is you have a bucket of resource capacity (CPU/memory/disk/network) and a bucket of workloads that have a discrete moment of being started and which will run to “completion”. You want to dispatch computation jobs to be executed, allow them to run to completion and then report on the status and resources taken to accomplish that. What you don’t want to do is worry about uneven load profiles, manually intervene when jobs fail or systems fall over, or figure out which host to execute a job on.
Some systems, like LSF/OpenLava and others, were created back in a day when there was a huge variety of capability in terms of horsepower and there were lots of proprietary hardware platforms. Those factors were joined by others: making sure that software licenses, which were few in number, were always in use; fair-share allocation of computational horsepower and software licenses; and organizationally induced prioritization of this project versus that project.
Today, hardware performance is orders of magnitude better and we’re not so much worried about computational horsepower as footprint cost efficiency. Back in the old days we’d run on-premise clusters of large numbers of very expensive servers in very expensive data centers. Nowadays we have Cloud Service Providers which can supply enormous amounts of extra computational capacity on demand, spun up only for as long as it’s needed and spun down immediately afterward to minimize run costs. We’ve eliminated the sunk portion of data center run cost from the equation.
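A back-of-the-envelope model makes the point. All numbers below are invented for illustration; the shape of the comparison is what matters: an owned cluster bills you for every hour whether it’s busy or not, while on-demand capacity bills only for consumed node-hours.

```python
def annual_cost_on_prem(n_servers, capex_per_server, amort_years, opex_per_server_yr):
    """Owned cluster: amortized hardware plus operations, paid whether or not
    the nodes are busy."""
    return n_servers * (capex_per_server / amort_years + opex_per_server_yr)

def annual_cost_cloud(peak_nodes, avg_utilization, hourly_rate):
    """On-demand: pay only for the node-hours actually consumed."""
    return peak_nodes * avg_utilization * hourly_rate * 24 * 365

# Purely illustrative numbers: 100-node peak, 35% average utilization.
on_prem = annual_cost_on_prem(100, 12000, 4, 2000)
cloud   = annual_cost_cloud(100, 0.35, 0.80)
print(round(on_prem), round(cloud))  # → 500000 245280
```

The gap narrows as average utilization climbs toward 100%, which is exactly why bursty grid workloads are the sweet spot for the cloud model.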
As we all know, most of the really great inventions in history were made by eliminating something from a prior invention: A magnificent martini is made that way by the elimination, or at least minimization, of the Martini (vermouth) from the equation. In the same way, eliminating the concept of owning actual servers and putting the load in the cloud enables organizations to radically alter the cost associated with operating high performance computation grids.
Kubernetes has the ability to dispatch arbitrary code execution to nodes. The cluster is aware of which nodes are part of the cluster and how much load they’re under, so it’s relatively easy to code a little Python/Ruby/C/whatever to interface with a SQL or NoSQL database, build a list of jobs needing dispatch and get them dispatched. When a queue of jobs builds up due to a lack of free resources, the code can, with very simple boundary configurations, elect to launch new execution node instances on the CSP (Cloud Service Provider) infrastructure of choice or to persist with the queue at some non-zero depth.
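As a sketch of that boundary-configuration logic (pure illustration, not the Kubernetes API; in a real system this decision would feed a call to your CSP’s instance-launch API or a cluster autoscaler):

```python
def scale_decision(queue_depth, free_slots, max_burst_nodes, running_burst,
                   slots_per_node=8):
    """Decide how many extra cloud nodes to launch given current queue depth.

    Launch enough nodes to absorb the queued jobs that free slots can't cover,
    capped by a configured burst limit. Returning 0 means persisting with a
    non-zero queue depth, which is a perfectly valid policy choice.
    """
    unservable = max(0, queue_depth - free_slots)
    wanted = -(-unservable // slots_per_node)   # ceiling division
    return min(wanted, max_burst_nodes - running_burst)

# 40 queued jobs, 4 free slots, burst limit of 5 nodes with 2 already running.
print(scale_decision(40, 4, 5, 2))  # → 3
```

The whole “boundary configuration” is three numbers: slots per node, burst limit, and (implicitly) the queue depth you’re willing to tolerate.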
The efficiency to be gained is not simply in the fact that the company no longer has to own large numbers of servers and pay for their continuous operation whether they’re fully utilized or not. A huge gain is in the simple fact that CSPs tend to price based on utilization of network bandwidth and data ingress/egress from their assorted block or object storage systems, but not on in-cloud usage of those very same storage sub-systems. The actual cost of the CSP provided CPU cycles, memory utilization and in-cloud storage access is heavily subsidized by out-of-cloud network/storage IO charges. High performance compute grids are almost universally intense in their utilization of CPU and memory and notoriously light in their need to import/export large amounts of data from the computational environment.
The next big change we see is that jobs are not actually arbitrary, in large part. Many jobs are regularized. That is, they are routine and come about as a byproduct of the development process. When you complete a piece of code, it needs to be unit tested and regression tested. When you design an ASIC, it generates follow-on load which is predictable. Many organizations rely on grid computing to run routine, regular reports, analytics and business processes. These are things that can be statically defined either in code or in databases. That’s standard workload. Everything else is arbitrary workload.
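Separating the two can be as simple as a lookup against a table of registered routine workloads; everything not registered falls through to one-off handling. The workload names here are hypothetical:

```python
# Registered routine workloads (e.g. post-commit test suites, nightly reports).
# In practice this set would live in the SQL/NoSQL database mentioned above.
STATIC_JOBS = {"unit-test", "regression-test", "nightly-report", "asic-drc"}

def classify(job_name):
    """Static jobs get pre-baked, version-controlled job manifests; anything
    else is treated as arbitrary and gets a generated one-off spec."""
    return "static" if job_name in STATIC_JOBS else "arbitrary"

print(classify("regression-test"), classify("adhoc-sim-1234"))  # → static arbitrary
```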
So what we have here is an incipient change in how HPC gets done. The hard part had always been dispatching jobs; now the hard part is architectural. Orchestrating job dispatch has been made trivially easy. The current challenge is discerning what is a static job versus what is an arbitrary job and automating the Kubernetes configuration accordingly, though even that is fairly easy because the static-versus-arbitrary nature of any particular job is usually obvious.
I’m not saying that there’s no effort in creating the necessary bits of code and building the necessary back end systems to accomplish these goals. What I’m saying is that we no longer need to pay IBM’s (or whomever’s) extortionist license fees for LSF (or whatever) and we no longer need to maintain extensive farms of servers and difficult to manage, highly specialized grid computing engines which require expensive-as-hell HPC experts like myself. All you need now is a basic bitch sysadmin who knows extremely common and popular technologies like NoSQL/SQL, Python/Perl/Ruby, Linux, Kubernetes, Docker, etc… There are maybe a few thousand people in the USA who really know how to make IBM’s LSF grid computing software work and how to troubleshoot it. There are probably a million or so Linux sysadmins (also like myself) who know NoSQL/SQL, Python/Perl/Ruby, Linux, Kubernetes, Docker, etc… and even if they don’t know one or more of those things, they’re all easy to learn if you’re already a Linux sysadmin. They’re easy to learn for us because they were bloody well meant to be. If we’re to use them, and we’re a lazy bunch, which is why we automate everything we can figure out how to, they have to be easy to learn, easy to use and easy to automate or we won’t do it.
So, now that I’ve given you this off-book use case for Kubernetes, get out and use it. Yes, it’ll take a few weeks longer to implement than LSF would, but in the end it’ll cost you millions of dollars less to maintain and you won’t have to pay IBM’s (or anyone else’s) heart-thumpingly exorbitant license fees, which are deliberately structured to extract every possible last cent from your organization.
Go (to heck Big) Blue!
Epic scope. My only gripes (aside from the price point) are really very minor quibbles. Same perfect tracking, same great glass (actually some of the best ever in a USO), some real improvements in the turret setups. Some things are not so much improvements as changes, but you can’t turn your nose up at a USO.