Once upon a time I had an exchange with an Engineering group over RSS tolerancing. Now some of you may not know what RSS (root sum square) tolerancing is, so let me explain some basics. Assume you have 5 components that go together to make an assembly: nuts, bolts, washers, that kind of thing. Engineering typically assigns a tolerance to each part in the assembly, let's say +/- 0.005". In effect each part can vary by that amount, and when they come together as an assembly all of the parts combined could, in theory, add up to +/- 0.025". So far so good. Now, Engineering may be asked to estimate what might happen in the real world, either because we are trying to predict what might happen or because we are trying to figure out a problem that is already happening. So, knowing that 3 sigma (3 standard deviations) from the average covers 99.7% of the population, you could take each tolerance value given and shave it down to fit into a 3 sigma window. In the RSS approach you divide each tolerance by 3, so +/- 0.005" becomes 0.001667". Then you square each of those values, add them up, and take the square root, and you assume that number is the actual real-world standard deviation of the assembly stack-up as the parts are produced. Multiply by 3 and voilà! You have 99.7% confidence that this is what you would see in the real world. Are you suspicious yet? You should be, because it is a cheap magic trick that deceives you.
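
For those who want to see the recipe in numbers, here is a minimal sketch in Python of the arithmetic just described, using the 5-part, +/- 0.005" example from above. It simply reproduces the textbook RSS calculation; it is not an endorsement of it.

```python
# Minimal sketch of the RSS arithmetic described above, applied to the
# 5-part example (each part toleranced at +/- 0.005").
import math

part_tols = [0.005] * 5                 # +/- tolerance assigned to each part, inches

worst_case = sum(part_tols)             # straight worst-case stack: +/- 0.025"

# RSS treats each +/- tolerance as a 3-sigma value for that part...
assumed_sigmas = [t / 3 for t in part_tols]                      # 0.001667" each
# ...then root-sum-squares them into an assumed assembly sigma
assembly_sigma = math.sqrt(sum(s ** 2 for s in assumed_sigmas))  # ~0.0037268"
rss_claim = 3 * assembly_sigma                                   # ~0.0112" claimed +/- for the assembly

print(f"Worst-case stack : +/- {worst_case:.4f} in")
print(f"RSS claim        : +/- {rss_claim:.4f} in")
```

Running it prints a worst case of +/- 0.0250" against an RSS claim of roughly +/- 0.0112", which is exactly the kind of shrinkage dissected below.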

In this particular scenario we were trying to determine why we were experiencing problems with our supplier, and by modeling the parts based on the subcontractor's drawings and tolerances we saw an enormous stack-up that left us very concerned. To add to our concerns, the supplier had added up the stack for their own design and came up with +/- 0.009", while our numbers showed +/- 0.020" for the same parts, more than 2X what it was supposed to be. No one was comfortable with the numbers, and no one could explain them either. You see, it was a complex assembly with at least 10 smaller components. So the most senior engineer said, "Just apply RSS tolerancing; that would make me feel better. If the numbers come out much smaller, we're probably not in trouble." I made a comment along the lines of why that was a bad idea, but we went ahead and did the math anyway, and lo and behold the numbers came in much better and the analysis was no longer so scary.

When I expressed my frustration to another Engineer later that day, he said it best: "RSS is the beer goggle lens of Mechanical Engineering." Yes, life is a party with that kind of math, but I couldn't explain the ills of using RSS just by poking fun at it. So here is the basic reason why RSS is not only bad Engineering but should be banned. RSS math assumes the parts all go together and work in unison, linearly. It takes each tolerance (assumed to be bilateral, balanced equally about the nominal) and divides it by three to stand in for the average standard deviation, then squares each value, adds them up, and takes the square root of the total to assume how the variation spreads over the assembly. This is a forced normalization, as we call it in the statistical world. In the real world it is a bad idea because of three issues. First, not all tolerances are bilateral, though you can work around that. Second, not all parts work in unison and in a linear fashion. Third, the real world is just that, real, not averaged. Some parts become restricted by other components and by how they are assembled, and the actual standard deviation of the parts is unknown at the time RSS is applied. Done very carefully, all of this could be somewhat adjusted for. But the problem is even more fundamental than that.

Let's go back to our 5 part example above. We know the 5 parts have to go together, and when we add all their tolerances together we could have as much as +/- 0.025" of displacement, or in other words 0.010" times 5 parts, a total window of 0.050". If we divide the individual window of each part (0.010") by 6, assuming 3 standard deviations on each side of the average, the result is our assumed standard deviation for each part. Now if we square each of those values, sum them, and take the square root, we get an assumed standard deviation of 0.00372678" for the assembly. Listen to what we are saying here. Because we put +/- 0.005" on each part, rather than adding up the worst case of +/- 0.025" for the assembly, we back into a normalized spread and apply a +/- 3 sigma window to it. That assumed assembly standard deviation of 0.00372678", times 6, gives a total spread under "normal" conditions of 0.02236068", not the worst case of 0.050" we started with. I know I feel better now, don't you? Look, we just eliminated 55.28% of the tolerance window, which is a fantastic improvement. Wait, what? If you feel confused, welcome to the club.
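
To show how much that 0.0224" figure leans on the normality assumption, here is a small Monte Carlo sketch. This is my own illustration, not part of the original analysis, and the uniform part distributions in it are an assumption chosen purely to stand in for "real, not averaged" parts: if each part can land anywhere in its +/- 0.005" window instead of behaving like a tidy centered normal, the simulated assembly spread blows well past the RSS prediction.

```python
# Monte Carlo sketch: compare the RSS prediction for the 5-part stack against
# simulated assemblies whose parts do NOT follow the centered-normal assumption.
# The uniform part distributions are an illustrative assumption, not real data.
import math
import random

random.seed(1)

N_PARTS = 5
TOL = 0.005                      # +/- tolerance per part, inches
N_ASSEMBLIES = 100_000

# The RSS numbers from the text
rss_sigma = math.sqrt(N_PARTS * (TOL / 3) ** 2)    # ~0.0037268" assumed assembly sigma
rss_spread = 6 * rss_sigma                         # ~0.0224" assumed total spread

# "Reality" for this sketch: each part lands anywhere in its window with equal likelihood
stacks = []
for _ in range(N_ASSEMBLIES):
    deviations = [random.uniform(-TOL, TOL) for _ in range(N_PARTS)]
    stacks.append(sum(deviations))

mean = sum(stacks) / N_ASSEMBLIES
sigma = math.sqrt(sum((x - mean) ** 2 for x in stacks) / (N_ASSEMBLIES - 1))

print(f"RSS assumed assembly sigma : {rss_sigma:.5f} in")
print(f"RSS assumed total spread   : {rss_spread:.5f} in")
print(f"Simulated assembly sigma   : {sigma:.5f} in")       # ~0.0065 in
print(f"Simulated total spread     : {6 * sigma:.5f} in")   # ~0.039 in vs. 0.050 worst case
```

Shift the part means off nominal, the way real processes drift, and the gap only grows; the 55% "improvement" exists only inside the assumption.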

Now some of you will go back, take this example, and run the math six different ways till Sunday, and each of you will explain why my math is wrong because my assumptions are too simplified or my method is not the way you do it. The problem is not the math. The problem is that we cannot assume things are OK, or improving, without actual data. As I explain to all the Lean Six Sigma professionals I mentor, you can apply any transformation approach to the data you want; it is all wrong. Wrong in that it is either a smoothing or an averaging of numbers to make things look better, and in statistics we (I) do it all the time, but not to make things appear better. Typically we do it to see if we can "fit" data into a model that helps explain what the driving factors might be, and if things stand out, we go off and confirm our findings. That is statistics: to "infer" what might be going on. For true statistical tolerancing we should be making that inference from real data. Why? Because the real world is crappy and messy, and our goal is to understand what is real and what we can predict based on what has actually happened. Not just a mathematical assumption, but a mathematical model based on reality.

In this scenario, let's go back to the supplier, who was just as confused by our numbers. It turns out that during the time we started having problems, their performance was less than adequate. The key performance characteristic being measured was the insertion loss of a wire. There were several wires in each assembly, and over months of assemblies being built I had a good sampling of data. If I averaged the insertion loss of the wires in each assembly (all passed), the capability study showed a Cpk of 1.44. Very good. But if I took each wire separately and studied its capability, each wire's Cpk dropped to 0.7. Horrible. In Minitab this is the difference between stacking the individual values and averaging across them (illustrated in the short sketch at the end of this article). So is it appropriate, when you are seeking the root cause of a problem, to assume things can be normalized in order to see whether you are in a safe zone or out in the woods? I say no way. The system demands that each of the wires perform independently, and as such each needs to be of high quality. Averaging their values smooths out the data and in effect takes away the extremes, the poor performers. Any analysis based on the averages is therefore skewed away from reality.

Why then would we apply RSS if it has no basis in reality? Instead, I advocate using real data for statistical tolerancing. And for those of you who cry "we don't have part data for brand new designs," I say: then don't guess that things will be normal. Use similar part performance, use prototype data, or ask an expert in manufacturing what can be expected. Most importantly, as your design matures, update your analysis with real data. In our case, using the actual insertion loss numbers, our supplier was not at a 3 sigma level; they were barely making 2 sigma performance. We could predict, based on actual performance, that we would see nearly 20% fallout, and that is exactly what we were experiencing. So why would anyone want to assume normality? To learn more about that, read my article on Root Cause Analysis to see the biases we all carry with us.
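
As a final illustration of the stacking-versus-averaging effect, here is a short sketch in Python. Every number in it (spec limits, wires per assembly, the per-wire mean and sigma) is invented, chosen only so the two Cpk values land near the ones quoted above; it is not the actual insertion-loss data, just a demonstration that averaging across the wires in an assembly can roughly double the apparent Cpk of the same process.

```python
# Sketch of "stacked vs. averaged" capability, as described above.
# Spec limits, wire count, and process parameters are invented for illustration.
import random
import statistics

random.seed(7)

LSL, USL = 0.0, 1.0              # assumed insertion-loss spec limits (arbitrary units)
WIRES_PER_ASSY = 4               # assumed number of wires per assembly
N_ASSEMBLIES = 5000
MU, SIGMA = 0.55, 0.21           # assumed per-wire behavior, roughly a Cpk 0.7 process

def cpk(values, lsl, usl):
    """Cpk = distance from the mean to the nearest spec limit, in units of 3 sigma."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return min(usl - mean, mean - lsl) / (3 * sd)

assemblies = [[random.gauss(MU, SIGMA) for _ in range(WIRES_PER_ASSY)]
              for _ in range(N_ASSEMBLIES)]

stacked = [wire for assy in assemblies for wire in assy]           # every wire, individually
averaged = [sum(assy) / WIRES_PER_ASSY for assy in assemblies]     # one average per assembly

print(f"Cpk, wires studied individually : {cpk(stacked, LSL, USL):.2f}")   # ~0.7
print(f"Cpk, wires averaged per assembly: {cpk(averaged, LSL, USL):.2f}")  # ~1.4
```

The averages look capable because averaging shrinks the spread by roughly the square root of the number of wires; the individual wires, the things the system actually depends on, have not improved at all.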