ExactShooting.com Custom Sizing Die – Experiment #1 Results

So we have initial results. I’d like to thank you all for the views on my video.

We will be testing this die set more over the next year. This is out of my own pocket and out of my own curiosity; I have the credit card bills, and the arguments with my wife, to show for it. I must caution, because of some things people seem to have in their heads: this is never going to make a 1″ gun into a .5″ gun. Anyone suggesting such a thing is either a fool or a liar. What you should expect is reduced variability in your ammo, which reduces things like flyers and SDs. Small effects on group size should be expected as a normal byproduct of better consistency, but because barrel harmonics are so heavily involved, it’s best to keep your hopes in check and out of the land of silliness.

I set up a partially blinded experiment with unfired, 2x-fired and >5x-fired cases. We (Coach and I) sized 50rds of each from my Exact die and 50 of each from Coach’s Redding die and tested them in Coach’s rifle. Coach’s rifle has somewhere over 1900rds down the pipe now, which is a concern as you’ll see soon. We set the ammo up identically in everything from components to neck tension. We ran 10-shot groups composed of 2 non-consecutive 5-shot groups fired at the same aim point. Coach loaded, packed and labelled the ammo boxes (labels are “1” and “2”) and didn’t tell me which was which until after the shooting was done. I pulled the rounds from the boxes, logged data and called the target to engage while Coach did the shooting. That way neither of us knew during shooting which ammo was being fired at any given time. That was the best way I could think of to pull experimenter-induced bias out with a research team of 2.

The result of the first accuracy test was null. That is to say, the difference in average group size was not outside the level of statistical noise. The exception was brand new brass: it always shot more consistently than reloaded brass, so I removed those results from the full data set due to the noise they introduced. We weren’t meant to be testing new brass anyway, as that wouldn’t apply here, but I wanted that data for another experiment I’ve been running. This is all precisely what was expected. I expected no big accuracy result (but certainly hoped for one) simply from going to full length resizing with extremely consistent neck tension and headspace.
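If you want to run the same sort of check on your own targets, here is a minimal sketch of the comparison done as a Welch’s t-test in plain Python. The group sizes below are made up purely for illustration; they are not our actual data:

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(va / na + vb / nb)

# Hypothetical group sizes in MOA for each die (illustrative, not our data)
exact_die   = [0.72, 0.81, 0.69, 0.77, 0.74]
redding_die = [0.75, 0.83, 0.71, 0.80, 0.78]

t = welch_t(exact_die, redding_die)
print(round(t, 2))
```

With |t| well under 2 on samples this small, you’re inside statistical noise, which is exactly the kind of null result described above.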

Because the result is null though, we’ll re-run the experiment on that rifle just before we replace the barrel, to verify the results reproduce reliably. We also did some velocity testing as part of that, and there was no statistically significant change in average velocities or SDs except in the new-brass loads, which were more consistent set to set. Why pull the barrel? The rifle used for that run of the experiment now has ~1900 rounds through it in 6XC with a single load spec (38.5gr H4350, F210M, Norma brass, 115gr HBN-coated DTAC). The load is mild, generating only 2800fps, but we know that barrel is within a few hundred rounds of being pulled on principle, if not actual need, as far as match work goes, and it may not be capable of the repeatable accuracy that might show up with the Exact die. So, we’ll try another barrel. A new one. Actually, a new two! So stay tuned, there’s more to come.

In September I purchased 2 new barrels, which I got as blanks from the same production run (from Black Hole Weapons). I purchased a new custom reamer in 6XC that produces a chamber very tight to the dimensions of the Exact die. Thankfully you can order a reamer with any number of customizations and it’s still the same price as a custom reamer with just 1 custom dimension. Unfortunately it takes weeks for such a reamer to be made. Over the winter I handed the whole works over to a gunsmith friend of mine who also makes ultra-precise gauges as a business. He has the equipment and skills to set up barrels that are truly as identical as we could make them, and identical enough for a useful experiment to come out of it despite an extremely small sample size.

Anyway, I got both barrels cut, profiled and chambered identically, at great cost: setting each one up was double what I normally pay him to set up a barrel, with over 15 hours of work on each. These are our new match barrels for the next 2 seasons too. Coach and I will be shooting from the same ammo box so we can share data. Maybe we’ll pick up a few points on same-day wind calls.

We did have a non-null result, from a different direction, which I also predicted. With loads sized with my ExactShooting.com die we never had trouble closing the bolt; it was, in fact, always exactly the same effort. On the cases sized with the Redding neck die that Coach uses, bolt-close effort was either not much or a TON. Some post-facto testing later on with Coach’s FL die showed the same random bolt-close effort. This is obviously due to random headspacing, which means that Coach’s FL die probably needs a thou or two buzzed off the bottom. Irrelevant though, because we’re testing what’s available out of the box, and his FL die out of the box didn’t cut it. I suspect that a lot of FL dies out there may be a little long or short and aren’t sizing things like people think they are.

Those are only the results from a well-used barrel. We will be running this exact same test using the 2 newly set up barrels. One will go on the same gun (Coach’s match rifle) while its twin, which now has just under 400 rounds on it, is on my “Hot Dog Gun” match rifle. I don’t expect any difference but I could wind up being surprised. The new barrel on Hot Dog Gun is extremely accurate so far, better than Coach’s rifle on its first day. We’ve already developed a load for the new barrel that runs things a bit faster (2980fps), so hopefully with more pressure more differences might start to manifest.

One of the cool things about the ES die is that you can pull the body/shoulder portion out and still use the neck sizing portion, which itself is easily adjustable for neck tension and neck sizing depth. When you start getting hard bolt close you can dial in .0005″ or .001″ or .0015″ or whatever amount of push-back on the shoulder with an easy click adjustment and know it’ll give you exactly that. We’ll be running a neck tension accuracy test here real soon. We’ll see if .0005″ increments make real differences on paper. First though, I’m ordering some brand new brass for that test.

Cost is fairly high for these dies but not unprecedented. That’s true but beside the point. If you have the money then that’s not an issue anyway. Functionality is THE issue. It’s perfectly functional and makes it super easy to dial in neck tension in .0005″ increments for those really finicky loads, to dial neck sizing depth in .020″ increments and to dial how far back you actually push your shoulders in .0005″ increments. They’ll make one to a reamer print too. How precise are the dies? Well, I had my machinist do some gauging to see if they were that precise and he was pretty darned impressed.

For benchrest guys and F-class guys, I think this really has the potential to up their game a bit, but only because those guys tend to have done everything else already. BR and F-class are the only places I can think of offhand where neck tension and headspacing are tightly controlled by the shooters, both routinely and with an obsession rarely seen.

Is it going to help Joe Sixpack? Well no, to be honest. Joe doesn’t know enough to get the potential benefit to begin with. Owners of this die will 100% want to keep their brass sorted by number of firings. They’ll know what spring-back is and why it’s important to them, and a lot more. They will be the type that can’t deal with unexpected 5’s instead of 0’s or 1’s in the 4th decimal place of a measurement. The right owner for this die is someone very much like me, in that they are prone to setting up narrowly defined experiments and analyzing the resulting statistical data before forming opinions. They’re nerds.

For Coach and me the benefits are being able to share ammo and ballistics data in a match, not running out of time on match stages due to bolt cycling problems, not overworking or insufficiently sizing the brass, and being able to make subtle adjustments with truly minimal effort, as precisely as adjusting a tactical rifle scope.


Freebie – ReloadingXLR – An Excel Based Reloading Spreadsheet

Enjoy this freebie from BallisticXLR. ReloadingXLR is an Excel spreadsheet (compatible with Google Sheets, OpenOffice, and most other spreadsheet applications) for metallic cartridge reloaders looking to track load performance, reloading costs, firearm inventory, box labels and statistical data.

A number of customers have asked for this resource and since it’s such a useful tool and I’m feeling generous, I’m giving it away for free to the masses. Download by clicking the image below and enjoy!

-Meccastreisand

Here are some screenshots of what’s included:
Individual Shot Log w/ Group Size calculator

Weapon Database:

Ammunition Box Labels

Unit Conversions

Reloading Cost Calculator

Shot String Graphs

Velocity:Temperature Multi-Session Analysis

Individual Shot Statistics

Upgrade Time: BallisticXLR Version 10.3 Is LIVE!

Version 10.3 is officially live. This much anticipated upgrade includes a new Loophole Shooting feature, an improved Calc Form, tons of minor formatting fixes and other improvements to make your long range shooting experience as rewarding and successful as possible.

NEW! Loophole Shooting Feature: In response to high demand, the new Loophole Shooting feature has been implemented. It computes the minimum vertical size of loophole required to place a shot on target with the loophole placed 10 feet (3 meters) from the shooter. There is no other external ballistics application in the world that integrates this feature with your primary DOPE. At this time the Loophole data is only on the 100yrd/m increment Full Sheet tab. This is with the assumption that if you’re shooting from behind a loophole you’ve got more time to set up your shots, including setting up a sniper range card, which justifies the extra data on the 100m full-sheet tab compared to the 100yrd/m half-sheet tab. If there is sufficient demand we’ll add it to the 100yrd/m half-sheet tab in the next patch release.

Loophole Technical Details: The Loophole Shooting feature gives you the loophole size in inches or centimeters required to make the shot without hitting the edges of your loophole or the barrier it’s been created in. This feature requires careful measurement of your scope height. The level of precision required is now in the .0x-inch zone, but only if you plan to use the Loophole Shooting feature. If you never need this feature then .1″ of slop in your scope height measurement will be inconsequential.
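For the curious, the geometry behind a loophole number can be sketched to first order. This is not BallisticXLR’s actual formula, just an illustrative flat-fire approximation; the function name, the 0.25″ margin and the zero-range handling are my own assumptions:

```python
def min_loophole_height(scope_height_in, zero_range_yd, loophole_dist_ft,
                        margin_in=0.25):
    """
    Rough minimum vertical loophole size (inches): the opening must span
    from the line of sight down to the bullet's path at the loophole.
    Flat-fire approximation: the bore line converges on the sight line
    at the zero range; gravity drop over 10 feet is ignored.
    """
    d_in = loophole_dist_ft * 12.0
    zero_in = zero_range_yd * 36.0
    # The bullet starts scope_height below the sight line and closes that
    # gap (to first order) linearly by the zero range.
    gap_at_loophole = scope_height_in * (1.0 - d_in / zero_in)
    return gap_at_loophole + margin_in

# e.g. 2.5" scope height, 100yd zero, loophole 10ft away
print(round(min_loophole_height(2.5, 100, 10), 2))
```

This also shows why scope height suddenly matters so much: at 10 feet the required opening is dominated by it, so a .1″ measurement error passes almost straight through to the answer.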

Why Loophole Shooting: When BallisticXLR was partnered with the RexReviews project with TiborasaurusRex, Rex explicitly forbade providing this feature to the masses. Now that we’ve gone independent, we don’t have to withhold it anymore and in keeping with our custom of providing you the most capable system regardless of who might get upset about it, it’s now been released to the public. We are committed to providing continuous upgrades with new major features and minor features that are already planned as well as responding to the requests of those that use BallisticXLR.

Other Improvements: Major and minor improvements have been lavished upon BallisticXLR version 10.3 which, as our flagship product, it richly deserved. Improvements include a simplified and improved Calc-Form, plus font size and color changes for easier reading in low light. We’ve put new Sniper Data & Shot Record cards in to replace the older FM-23-10 derived versions. Quick start instructions on the inputs page have been clarified and simplified. Borders, colors, shading, contrast and many other elements of style have been tweaked to provide an improved user experience.

As always, the simple download is only $10. You should really consider getting a support entitlement as ballistics is a complex science and setting up a ballistics package as full featured as BallisticXLR can be a little daunting for the uninitiated despite our best efforts to make it as simple as possible. A basic Bronze support entitlement is only $50 and comes with a copy of BallisticXLR. We also have Silver and Gold support levels which increase the number of allowed support requests and reduce the maximum response time. All support entitlements also come with free upgrades for one full year! Don’t miss out on new stuff or 1:1 personalized help when you need it!

Existing Download-Only Customers: If you have purchased a download-only copy of BallisticXLR (does not include BallisticPRS or BallisticDLR) within the last 30 calendar days and would like the upgrade to Version 10.3, email ballisticxlr@gmail.com with your paypal transaction number & date of purchase and we’ll upgrade you free of charge.

Existing Support Entitlement Holders: If you purchased a support contract & download within the last 365 days you are entitled to a free upgrade to Version 10.3. To redeem your upgrade, email ballisticxlr@gmail.com with your paypal transaction number & date of purchase and we’ll upgrade you to Version 10.3 free of charge. This upgrade does not extend your support contract.

KubeGrid: Using Kubernetes to Supplant Common Grid Computing Engines

This is not my normal fare. If you’re not a computer geek you may find the following paragraphs a little technical and quite possibly uninteresting because of that. I’d encourage you to read on though: what you should come away with is a new way to look at the problems you face, and a strategy for dealing with them that will bring you much personal satisfaction, or at least minimize the amount of hair you pull out of your head.

Start here: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion

There is never anything really new in the world of computing. All we have are problems that have been solved before and new flavors of those same problems and solutions. What really changes is that people forget we solved all of the really difficult problems years ago. We had to, because they were new problems when computing was something fresh in industry. Now that computing is pervasive, what we have is a repeating cycle of identifying problems to be solved and figuring out how they’ve been solved before, or ignoring the past (at our peril) and creating entirely new solutions which are, in fact, just different colors of the same solutions we came up with before… if we’re lucky. That amounts to a statement like, “Well, we have a really complex problem, so here’s a stunningly complicated solution.”

I, for one, detest the idea that complex problems need newly invented ultra-complex solutions simply because the problem appeared superficially (or actually is) complex or new. There is no problem so complicated that a very simple solution cannot be identified if you think about the problem the right way. There are insanely few problems which are in reality the least bit new. At best, they’re just the same problem in a new shape or color, so to speak. In a moment, you’ll be introduced to my preferred method of solving problems, which always yields fairly simple solutions. It does that because it works like the thought process behind early Macintosh computers. Early Macs were built, seemingly, with a notion something like, “Give them so little memory and processing power that they won’t be able to do anything anyway.” I must at this point give a wink and a nod to Douglas Adams, who originally made that exact statement and from whom I’ve borrowed it. There’s a certain amount of sarcasm in that but hang with me and you’ll see my point.

What I mean by all of that is that simplifying the problem comes down to really seeing where the actual fundamental problem is (Mac users, of which I am one, wanting to do very intensive computational tasks on end-user grade hardware is the fundamental problem) and not where the superficial problem is. In this case the superficial problem is Macs being the preferred platform for those doing computationally intensive tasks, like video editing, because they’re user friendly, as opposed to Windows, which is user unfriendly, and UNIX/Linux, which is downright user-hostile. UNIX/Linux server-grade hardware would be the right way to do these computationally intensive tasks, but it sucks for humans to use. So Mac users are the fundamental problem: they picked the wrong tool. Apple responded by making sure that the user would realize that and would eventually put those workloads onto higher end hardware. Now we have video editors doing very small bits of editing on very small bits of video on their Mac and then sending many such snippets to a larger compute cluster for rendering and final processing to come out with a whole “thing”.

Those familiar with “Grid Computing”, “High Performance Compute” and other flavors of the topic know that what you’re really dealing with is a system that understands bounded resource blocks and workload. What it amounts to is that you have a bucket of resource capacity (CPU/memory/disk/network) and a bucket of workloads that have a discrete moment of being started and which will run to “completion”. You want to dispatch computation jobs, allow them to run to completion and then report on the status and resources taken to accomplish that. What you don’t want to do is worry about uneven load profiles, manually intervening when jobs fail or systems fall over, or figuring out which host to execute a job on.

Some systems like LSF/OpenLava were created back in the day when there was a huge variety of capability in terms of horsepower and there were lots of proprietary hardware platforms. Those factors combined with others: making sure that scarce software licenses were always in use, fair-share allocation of computational horsepower and software licenses, and organizationally induced prioritization of this project versus that project.

Today, hardware performance is orders of magnitude better and we’re not so much worried about computational horsepower as footprint cost efficiency. Back in the old days we’d run on-premise clusters of large numbers of very expensive servers in very expensive data centers. Nowadays we have Cloud Service Providers which can provide enormous amounts of extra computational capacity on demand, spun up only for as long as it’s needed and spun down immediately afterward to minimize run costs. We’ve eliminated the sunk portion of data center run cost from the equation.

As we all know, most of the really great inventions in history were made by eliminating something from a prior invention: A magnificent martini is made that way by the elimination, or at least minimization, of the Martini (vermouth) from the equation. In the same way, eliminating the concept of owning actual servers and putting the load in the cloud enables organizations to radically alter the cost associated with operating high performance computation grids.

Kubernetes has the ability to dispatch arbitrary code execution to nodes. The cluster is aware of which nodes are part of it and how much load they’re under, so it’s relatively easy to write a little Python/Ruby/C/whatever that interfaces with a SQL or NoSQL database to build a list of jobs needing dispatch and get them dispatched. When a queue of jobs builds up due to a lack of free resources, the code can, with very simple boundary configurations, elect to launch new execution node instances on the CSP (Cloud Service Provider) infrastructure of choice, or to let the queue persist at some non-zero depth.
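As a sketch of what that dispatch glue can look like, here is a minimal builder for a Kubernetes batch/v1 Job manifest. The field names follow the Kubernetes Job API; the dispatcher itself, the job name and the image are hypothetical, and in a real system you’d hand the resulting dict to the Kubernetes client library of your language of choice:

```python
def job_manifest(name, image, command, cpu="1", memory="2Gi"):
    """Build a Kubernetes batch/v1 Job manifest for one queued grid job.
    restartPolicy=Never plus backoffLimit lets Kubernetes handle retries,
    which is the run-to-completion behavior grid engines provide."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "backoffLimit": 2,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": name,
                        "image": image,
                        "command": command,
                        # Resource requests are what lets the scheduler do
                        # the bin-packing a grid engine would otherwise do.
                        "resources": {
                            "requests": {"cpu": cpu, "memory": memory},
                        },
                    }],
                },
            },
        },
    }

# Hypothetical job pulled from the dispatch database
m = job_manifest("regression-suite-42", "myregistry/sim:latest",
                 ["python", "run_tests.py"])
print(m["kind"], m["spec"]["template"]["spec"]["restartPolicy"])
```

The point of the sketch is that the "grid engine" shrinks to a loop that reads pending jobs from a database, emits manifests like this, and lets the Kubernetes scheduler do the placement.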

The efficiency to be gained is not simply in the fact that the company no longer has to own large numbers of servers and pay for their continuous operation whether or not they’re fully utilized. A huge gain is in the simple fact that CSPs tend to price based on utilization of network bandwidth and data ingress/egress from their assorted block or object storage systems, but not on in-cloud usage of those very same storage sub-systems. The actual cost of the CSP-provided CPU cycles, memory utilization and in-cloud storage access is heavily subsidized by out-of-cloud network/storage IO charges. High performance compute grids are almost universally intense in their utilization of CPU and memory and notoriously light in their need to import/export large amounts of data from the computational environment.

The next big change we see is that jobs are largely not actually arbitrary. Many jobs are regularized. That is, they are routine and come about as a byproduct of the development process. When you complete a piece of code, it needs to be unit tested and regression tested. When you design an ASIC, it generates follow-on load which is predictable. Many organizations rely on grid computing to run routine, regular reports, analytics and business processes. These are things that can be statically defined either in code or in databases. That’s standard workload. Everything else is arbitrary workload.

So what we have here is an incipient change in how HPC gets done. The hard part had always been dispatching jobs; now the hard part is architectural. Orchestrating job dispatch has been made trivially easy. The current challenge is discerning what is a static job versus an arbitrary job and automating the Kubernetes configuration accordingly, and even that is straightforward because the static-versus-arbitrary nature of any particular job is easy to determine.

I’m not saying that there’s no effort in creating the necessary bits of code and building the necessary back end systems to accomplish these goals. What I’m saying is that we no longer need to pay IBM’s (or whomever’s) extortionist license fees for LSF (or whatever) and we no longer need to maintain extensive farms of servers and difficult-to-manage, highly specialized grid computing engines which require expensive-as-hell HPC experts like myself. All you need now is a basic bitch sysadmin who knows extremely common and popular technologies like NoSQL/SQL, Python/Perl/Ruby, Linux, Kubernetes, Docker, etc… There are maybe a few thousand people in the USA who really know how to make IBM’s LSF grid computing software work and how to troubleshoot it. There are probably a million or so Linux sysadmins (also like myself) who know those same technologies, and even if they don’t know one or more of them, they’re all easy to learn if you’re already a Linux sysadmin. They’re easy to learn for us because they were bloody well meant to be. If we’re to use them (and we’re a lazy bunch, which is why we automate everything we can figure out how to), they have to be easy to learn, easy to use and easy to automate or we won’t do it.

So, now that I’ve given you this off-book use case for Kubernetes, get out and use it. Yes, it’ll take a few weeks longer than LSF would to implement, but in the end it’ll cost you millions of dollars less to maintain and you won’t have to pay IBM’s (or anyone else’s) heart-thumpingly exorbitant license fees, which are deliberately structured to extract every possible last cent from your organization.

Go (to heck Big) Blue!

US Optics B-17 Review – Very Nice!

Epic scope. My only gripes (except the price point) are very minor quibbles in reality. Same perfect tracking, same great glass (actually some of the best ever in a USO), some real improvements in the turret setups. Some things are not so much improvements as changes but you can’t turn your nose up at a USO.

Barrel Life, Accuracy and Velocity – Columbia River Arms Wins!

I’m running a .243AI set up by Columbia River Arms (formerly Black Hole Weaponry) about a year ago. It’s a pre-chambered drop-in with a pretty tightly necked chamber set up by CRA. I’ve got it set at zero head space so between that and the Ackley Improved case there’s zero brass growth after 4-5 firings.
It’s got just a touch over 1000 rounds down the pipe and appears to be going strong. So far I’ve only had to push the bullet out .010 and add .1gn powder to keep everything tight to my original load spec. I don’t know what kind of life the pipe has left in it. I’m running 115gn 6mm DTAC bullets at 3200fps with a modest charge of very slow burning powder (RL-23). Pressures are pretty mellow but it’s, for sure, burning that powder all the way down the barrel. This is evidenced by the fact that there’s just the tiniest bit of flash in the first chamber of my brake that’s visible in low light conditions.
Corey testing out the CRA barreled Hot-Dog Gun at 900yrds.
In a more conventional barrel I’d guess I’d have between 100 and 300 rounds more life before it’s just not match grade anymore (based on a 1200-1500rnd life expectancy), but I would also expect substantially more throat erosion than I’ve gotten to this point if that were the case. I started with uncoated 108 ELD’s and quickly went to HBN (hexagonal boron nitride) coated 115 DTAC’s. The rebated boat tail and pointed tip on the DTAC’s pulls the BC up to .620, which puts me up to 1 mile of supersonic range. So far it’s been out to 1500yrds and proven itself very capable.
Out of the gate I was getting 10-shot groups like those below (these are fireforming and load development groups, the first loads out of the barrel). After a little refinement they settled down to repeatable .5-.7MOA across 10 shots with single digit SDs (5fps across over 100 rounds loaded in 3 sessions). The thing has been ridiculously consistent since. Once I found an optic I could deal with in matches (I hated the turrets on Vortex Razor 2’s, the U.S. Optics ER-25 was just too damned big, the SWFA 16×42 was too much minimum magnification, etc… nitpicky stuff) in the form of the U.S. Optics SN3 3.8-22x58mm with a custom made PRS-oriented reticle and 35mm main tube, I really started to have some fun with it, including punishing the rifle with 10-shot strings in 90 seconds on hot days (hey, that’s the stage at the match). I wasn’t going to take it easy on this barrel.
I crossed the 1000 round mark last month at a match and I’d thought the barrel might be toasted then due to some repeated and huge misses on otherwise simple shots. Turns out it was just me; I clearly did something wrong to make those misses. I know that because I went out again this month to teach a long range precision rifle class and demonstrated most drills and techniques with my .243AI. It started out by making a .5″ 5-shot group @ 100 yards. At the end of the class it got to be time to see what I could do under some performance pressure, so I got right down into the prone with my Columbia River Arms barreled Savage 10FPSR, dialed the parallax on my U.S. Optics SN3 3.8-22×58, extended the Accuracy Solutions BipodEXT, set the Accu-Tac SR-5 bipod to 45deg forward and slapped a 6″ 900-yard 5-shot group on the steel, rapid fire, in direction-shifting 5-15mph winds while the student body looked on.
I’m using 45.6gn of powder now. It started at 45.5gn of Reloader 23 in a very tight chamber with Hornady brass. By the book one should expect to see 3000-3100fps with 44-45gn of powder in a 24-inch barrel with 100gn to 105gn bullets. I’m getting 3200fps with 115’s and only 45.6gn in a 26″ barrel. I’d expect to see 25fps or thereabouts per inch of barrel after 24″ but certainly not 50fps per inch from barrel length alone and not with a heavier longer bullet. I’m also not even remotely pushing this round. I can go another 3gn of powder before even starting to flatten primers but 3300fps only serves to damage steel targets and is technically against the rules. 3200fps is max so that’s what I’m running. I already damage quite a few targets at 3200fps anyway so I don’t need any help in that department.
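Here is that back-of-envelope arithmetic laid out, using the top of the book range and the ~25fps-per-inch rule of thumb mentioned above (the numbers are from the paragraph; the layout is just for illustration):

```python
# Back-of-envelope check on the velocity claim (illustrative numbers).
book_velocity = 3100      # fps, top of book range: 24" barrel, 100-105gn bullets
fps_per_inch = 25         # commonly quoted gain per inch of barrel beyond 24"
extra_inches = 26 - 24    # my barrel is 26"

expected = book_velocity + fps_per_inch * extra_inches  # 3150 fps
actual = 3200             # measured, with heavier 115gn bullets

# Surplus over the book-based estimate, in fps
print(actual - expected)  # prints 50
```

That 50fps surplus, with a heavier and longer bullet and barely more powder, is the gap the barrel-length rule of thumb can’t explain.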
Hot Dog Gun in .243AI. A Savage 10FPSR with bits from MDT, XLR, Magpul, BipodEXT, Accu-Tac, U.S. Optics, Seekins, Weaver, and JP Enterprises. Painted to look like a Dodger Dog. Go Dodgers!

Typically as I wear out a barrel I’ll see it shoot fine, fine, fine, start to open up, plateau, fine at plateau, open up more, open up more, open up more and it’s all downhill from there. After the plateau if it doesn’t quickly plateau again it’s getting there and it’s time to start planning my next pipe. I’ve already started planning my next pipe, a 6XC to match an identical one we’ll put on Coach’s gun. Nonetheless, this barrel is still good. Question is, for how long?

I know from prior experience that I get a little longer barrel life from the polygonal rifling that CRA uses. I’ve not burned out enough of them to get a useful statistical value for how much longer, but I can speculate. Right now, given the throat wear and grouping we’re getting on Coach’s existing 6XC, which is at 1500 rounds so far, and the expected life of that Shilen barrel being around 2200-2300 rounds, I’m estimating, and trying to be extremely conservative in that estimation, that I’ll make it to 1800 rounds or further before this pipe is really done for match work.

The difference between .243Win (right) and .243AI (left) is shoulder angle, body taper, performance, case life and barrel life. The loaded round has a 108gn ELD-M in it and 39.5gn of RL-23 for fireforming.

That’s almost 40% longer barrel life than I initially anticipated, if it gets there. We knew the HBN coating on the bullets would help barrel life, so I’m confident it’ll get to 1500. We knew the CRA polygonal rifling, with no sharp edges for the burning powder plasma to ablate, would help too. We knew the Ackley shoulder angle would keep the flame vertex inside the case neck, and that that would help too.

It’s just that with all those things helping, we have no idea where this train is going to stop. If I go on throat erosion alone, calculating how far until the boat tail is up inside the case neck, then I’m looking at almost 3000 rounds of barrel life. That’d be 230% of anticipated barrel life and I just don’t see that as realistic given the amount of powder being burned and the rapidity with which I shoot in matches. I’ll get that barrel pretty hot sometimes.
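The throat-erosion projection works out roughly like this. The erosion rate comes from the .010″ of seating-depth chase over the first ~1000 rounds mentioned earlier; the remaining-travel figure is an assumption of mine, purely for illustration:

```python
# Rough throat-erosion projection (illustrative; remaining_travel is assumed).
erosion_per_round = 0.010 / 1000   # inches of throat advance per round, observed
remaining_travel = 0.020           # assumed erosion left before the boat tail
                                   # would sit up inside the case neck
rounds_so_far = 1000

rounds_left = remaining_travel / erosion_per_round
print(round(rounds_so_far + rounds_left))  # ~3000 rounds total
```

The whole projection hinges on the erosion rate staying linear, which it rarely does late in a barrel’s life; that’s why I don’t believe the 3000-round number.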

Shooting stage 6 at my monthly match with Hot Dog Gun in its current form. Targets are on the opposite hillside from 300-700yrds away.

I get higher velocities than one might expect from less powder than one might expect. I get longer barrel life than one would expect. I get more accurate and consistent performance than one might expect (especially for a drop-in pre-fit). The thing turned out sub-MOA groups with fire forming loads. It did not like 55gn varmint bullets at all though. No surprise on an 8 twist. The chamber on it is very tight. It’s meant for someone who’s willing to turn necks if necessary (my inside neck diameter on a fired case is .2435). Thankfully I don’t have to neck turn. Lucky me, everything just fits perfectly. When I ordered it I specified that I would not be put off by a possible requirement to neck turn brass if that were what their reamer would require.

Much of this situation was and is by design. When I initially decided I wanted a fast 6mm I found out what my options were and then picked a chamber that would maximize performance, brass life and throat life. I picked a powder that would give maximum velocities without high pressures or a lot of flash. I picked projectiles that had very high BC’s and would be routinely available in boxes of 500 (including a primary and backup bullet). I set up a load that performs identically with both bullets and shoots to the same point of aim, so in case I’m unable to re-up on one I can use my backup supply of the other. I bought all of the brass, powder and primers I expected to ever use in this barrel ahead of time (8lbs of powder, though it looks like I might need another 8lbs). Everything about the gun except the optic I’d settle on was decided before the barrel even arrived. Best of all, the barrel was set up to CRA’s rigorous standards, which means it was done perfectly, and it was under $400.

Hot Dog Gun before it was even painted. Getting some early long range testing done. Both Vortex Razor 2’s are now replaced with U.S. Optics glass. I just like USO. What can I say, they work for me.

So why am I building a 6XC now? Well Coach and I shoot together. It’s best if we have one set of ballistics DOPE and shoot the exact same load through identical chambers. It’s actually best if we share a gun but I like mine and he likes his. We find that when we can use drop and wind corrections from each other that we win more matches. Duh. If I run a stage and miss 2 of 7 shots on wind, I can tell him what the adjustments would have been and what the wind was for those misses then he can adjust accordingly and pick up those points and vice versa.

So, I’ve got 2 new barrels on the way from CRA, 27″ 6mm 8-twist unprofiled blanks which we’ll have a local gunsmith chamber, thread and profile for us in 6XC with a .267 neck (CRA doesn’t have a 6XC reamer or I’d have them do it). We’ll set them up for zero head space to minimize brass growth and then we’ll use my new ExactShooting.com Custom Collection sizing die to perfectly set the head space and neck tension of our reloaded ammo. We’ll be as close to shooting the same rifle as two guys can possibly get. If you want faster velocities, longer barrel life and one heck of an accurate barrel, you could do a lot worse than to drop Columbia River Arms a line.

.243AI Dimensions

6XC Dimensions