My colleague J stopped in my office today with a curious fact: a technique one of his students had learned for approximating acute angles using nothing but a ruler. According to the student, to approximate the degree measure of an acute angle, simply mark the two sides of the angle at 3 inches from the vertex, measure the distance between these points, and multiply by 20. The result, the student claimed, is the degree measure of the angle, within a couple of degrees.
That is, to approximate the measure of an angle, simply measure a distance of 3 inches from the vertex along each ray. The claim is that 20 times the length of the segment connecting these points is, more or less, the degree measure of the angle between the rays.
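It's easy to test the claim numerically, without giving anything away about *why* it works. Marking each ray at 3 inches and connecting the marks produces a chord of length 2 · 3 · sin(θ/2), so the trick's estimate is 120 sin(θ/2). Here's a quick sketch (the function name and loop are mine, not part of the student's trick) comparing that estimate to the true angle:

```python
import math

def estimate_deg(theta_deg, arm=3.0, factor=20.0):
    """The ruler trick: mark each side at `arm` inches from the vertex,
    measure the chord between the marks, and multiply by `factor`.
    The chord has length 2 * arm * sin(theta/2)."""
    theta = math.radians(theta_deg)
    chord = 2 * arm * math.sin(theta / 2)
    return factor * chord

# Compare the estimate to the true degree measure across acute angles.
for theta in range(10, 91, 10):
    est = estimate_deg(theta)
    print(f"{theta:3d} deg -> estimate {est:6.2f}, error {est - theta:+6.2f}")
```

Running this shows the estimate is exact at 60° and stays within a couple of degrees over most of the acute range, drifting only as the angle approaches 90°.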
J thought this was a strange result, but he’d tried it on a couple of different angles by hand (measured against a protractor) and was impressed by its accuracy, so he stopped by with one simple question: was this just a happy coincidence, or was there a reasonable explanation for why this trick works so unreasonably well?
I pondered this for a while, and was genuinely surprised to find a reasonable explanation.
Update. In fact, the solution is so pleasing, and raises a few interesting and related issues, that I’m sending it off to be published in the College Journal of Mathematics. So until then, the rest of this post will be unavailable. However, feel free to contact me at email@example.com if you’re interested in the details.