Quote:
Originally Posted by Splorf22
So here is something to ponder Estu. Take your Dwarf Skeleton example. The Wilson test will say 1.5-2.7% probability for the Rusty Bastard Sword, and 0.1-0.5% probability for the Cloth Cap, or something like that. On the other hand, I look at that table and I'm guessing that 50% of skeletons drop a random rusty weapon or cloth dagger, i.e. that all of the items have the same drop probability. Can you really see Nilbog sitting there saying ho ho, 2.7% for the Rusty Bastard Sword and 0.2% for the Cloth cap?
I'm guessing that a Bayesian approach will do well here for stuff that drops many different possible items.
I assume you're talking about 'a decaying dwarf skeleton' based on the numbers you're giving. I've run it through my program, and here is what it says for those two items:
Number of decaying dwarf skeletons killed: 415
Proportion of them holding rusty bastard swords: 2.2%
Proportion of them holding small cloth caps: 0.2%
95% Wilson confidence interval for rusty bastard swords: 1.1%-4.1%
95% Wilson confidence interval for small cloth caps: 0%-1.4%
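For anyone who wants to reproduce these intervals, here is a minimal sketch of the Wilson score interval. The raw counts are my assumption, back-calculated from the proportions above (2.2% of 415 ≈ 9 swords, 0.2% of 415 ≈ 1 cap); this is not the actual program mentioned in the post.

```python
from math import sqrt

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a binomial proportion (z = 1.96)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# Counts assumed from the stated proportions: 9/415 ~ 2.2%, 1/415 ~ 0.2%
lo, hi = wilson_interval(9, 415)
print(f"rusty bastard sword: {lo:.1%}-{hi:.1%}")  # prints 1.1%-4.1%
lo, hi = wilson_interval(1, 415)
print(f"small cloth cap: {lo:.1%}-{hi:.1%}")      # prints 0.0%-1.4%
```

With those assumed counts the formula reproduces both intervals quoted above, which suggests the back-calculation is right.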
A user looking at this could reasonably conclude that all of the 'common' items share a roughly 1% drop rate: the 1.1% lower bound for the rusty bastard sword is close to 1%, and since there are about 20 common items, it isn't surprising that one or two would fall outside their 95% confidence intervals. My conclusion is that the Wilson confidence intervals give good results, consistent with the reasonable assumption that all of these items really do drop at the same rate. That said, I don't know the theoretical or practical differences between Wilson intervals and a Bayesian approach, so I'd be interested in hearing about it.
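For comparison, here is a sketch of the simplest Bayesian version: a uniform Beta(1, 1) prior on the drop rate, giving a Beta(successes+1, failures+1) posterior, with a 95% equal-tailed credible interval estimated by Monte Carlo using only the stdlib. The count of 9 swords out of 415 kills is again my back-calculated assumption, not data from the actual program.

```python
import random

def beta_credible_interval(successes, trials, draws=200_000, seed=1):
    """Equal-tailed 95% credible interval for a binomial rate under a
    uniform Beta(1, 1) prior. The posterior Beta(successes + 1,
    failures + 1) is sampled with random.betavariate rather than
    computed analytically, to stay stdlib-only."""
    rng = random.Random(seed)
    a, b = successes + 1, trials - successes + 1
    samples = sorted(rng.betavariate(a, b) for _ in range(draws))
    return samples[int(0.025 * draws)], samples[int(0.975 * draws)]

# Assumed count: 9 rusty bastard swords in 415 kills
lo, hi = beta_credible_interval(9, 415)
print(f"rusty bastard sword posterior: {lo:.1%}-{hi:.1%}")
```

With a flat prior the credible interval comes out very close to the Wilson interval, so the two approaches only really diverge when you bring in extra structure, e.g. Splorf22's idea that all the common items share one drop rate, which would correspond to a hierarchical prior tying the per-item rates together.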