Possible Float Math Bug
|
|
DoteDote Edison
Thinks Too Much
Join date: 6 Jun 2004
Posts: 790
|
11-28-2005 00:21
Unless my logic is flawed, there is a bug somewhere in this multiplication process. What's curious is that it seems similar to the bug that causes prim cuts to not function as expected (clicking the up/down arrow to adjust the cut doesn't work in 0.05 increments). Here's the script I used to test. Drop it into a prim and start clicking. Clicking up from 0 to 100 the first time produces no errors. But going from 100 back to 0 introduces errors at 39, 34, 29, 24, 19, 14, 9, and 4 (0 is correct). Then going back up repeats the errors at 4, 9, 14, 19, 24, 29, 34, and 39 (45 to 100 is correct again). The script takes a beginning 0.0 float value, multiplies it by 100 to get a percent, casts the float to an integer, and reports the result in chat. Each touch adds 0.05 to the float until it reaches 1.0, at which point the direction reverses, subtracting 0.05 until the float reaches 0.0 again. The cycle repeats as often as you can click.

float x = 0.0;
integer goingUp = TRUE;

default
{
    touch_start(integer total_number)
    {
        float percent = x * 100;
        llOwnerSay("Percent = " + (string)percent);
        llOwnerSay("Percent = " + (string)((integer)percent));
        if (goingUp)
        {
            x += 0.05;
            if (x > 1.0) goingUp = FALSE;
        }
        else
        {
            x -= 0.05;
            if (x < 0.0) goingUp = TRUE;
        }
    }
}
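The drift is reproducible outside SL. Here is a sketch in Python (not LSL, which is why the event loop is replaced by a plain `for` loop) that mimics LSL's 32-bit floats by rounding every operation through IEEE-754 single precision:

```python
import struct

def f32(x):
    # round a Python double to IEEE-754 single precision (LSL's float type)
    return struct.unpack('f', struct.pack('f', x))[0]

STEP = f32(0.05)   # the stored value is actually ~0.050000000745, not 0.05

x = 0.0
k = 0              # exact count of 5-percent steps taken
going_up = True
mismatches = []    # (expected percent, reported percent)

for touch in range(200):             # several full up-and-down sweeps
    reported = int(f32(x * 100))     # LSL's (integer) cast truncates
    if reported != 5 * k:
        mismatches.append((5 * k, reported))
    if going_up:
        x = f32(x + STEP); k += 1
        if x > 1.0:
            going_up = False
    else:
        x = f32(x - STEP); k -= 1
        if x < 0.0:
            going_up = True

print(mismatches[:4])
```

Each mismatch is an off-by-one truncation of the kind described above: the float works out to something like 39.99999 instead of 40, and the cast to integer drops everything after the decimal point.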
|
|
Ben Bacon
Registered User
Join date: 14 Jul 2005
Posts: 809
|
11-28-2005 02:43
I think you are bumping into the age-old, everyday floating point precision problem. In this case the snafu is actually happening during the addition/subtraction, and not during the multiplication.
The technical explanation is that because of the particular representation LSL (and many, many other languages) uses for floating point numbers, and because computers do everything in binary, LSL can remember 0.5, 0.25, 0.125 (a half, a quarter, an eighth) and so on perfectly - but not 0.05.
Try this - use long division to calculate 1 divided by 7. You will notice that after a few digits you get stuck in a loop that doesn't end. You are going to have to choose at what point you give up and accept the result as a good-enough approximation.
Well, LSL has the same problem with 0.05 (if you know binary long division, try dividing 1b by 10100b). After adding and subtracting 0.05 enough you reach a point where the answer appears to be 39.99999999.... instead of 40.
Solution: The solution varies - but for this example, where you do not need incredibly large values or many decimal places (like percentages, or currency, etc.), you could use fixed point maths. Instead of adding $0.05 and $2.05, for example, code all your stuff in cents - add 5 to 205. Instead of adding 0.05 and running from 0.0 to 1.0, add 5 and run from 0 to 100.
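A sketch of that fixed-point idea applied to this thread's counter, written in Python rather than LSL for brevity: the value lives in integer percent units the whole time and only becomes a float at the very edge.

```python
# Counter kept in integer "percent" units: 0..100 in exact steps of 5.
percent = 0
going_up = True

readings = []
for _ in range(40):          # one full up-and-down sweep
    readings.append(percent)
    if going_up:
        percent += 5
        if percent >= 100:
            going_up = False
    else:
        percent -= 5
        if percent <= 0:
            going_up = True

x = percent / 100.0          # a float only where an API demands one
print(readings[18:23])       # [90, 95, 100, 95, 90] -- no 39s in sight
```

Integer addition and subtraction are exact, so the sweep hits every multiple of 5 dead-on, in both directions, forever.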
|
|
Travis Bjornson
Registered User
Join date: 25 Sep 2005
Posts: 188
|
11-28-2005 08:29
Yes, this bug exists in every machine that I know of when using floats. I think it's actually the CPU that does the math, and as I recall, if you take the square root of 25 on an Apple IIe, it gives a result of about 25.00000005.
As a workaround, try this to round off the floats:
touch_start(integer total_number)
{
    float percent = x * 10000;     // scale up first...
    percent = llRound(percent);    // ...round to the nearest integer...
    percent = percent / 100;       // ...then scale back to two decimals
}
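The same scale-round-unscale trick, sketched in Python for clarity, with the rounding written out by hand (add 0.5 and truncate, which behaves like llRound for non-negative values):

```python
def to_percent(x):
    # scale to percent, nudge by 0.5, truncate: rounds half up for x >= 0
    return int(x * 100.0 + 0.5)

drifted = 0.39999997            # the kind of value the drifting counter holds
print(to_percent(drifted))      # 40, where a bare truncating cast gives 39
```

Rounding absorbs the tiny accumulated error before the truncation can expose it.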
|
|
Alain Talamasca
Levelheaded Nutcase
Join date: 21 Sep 2005
Posts: 393
|
11-28-2005 09:06
A big chunk of what it ties to is that .1 is a repeating (non-terminating) fraction in binary, much like 1/3 in decimal; therefore, anything you multiply by .1 (or .05 or .025) is going to be, at best, an approximation.
Once upon a time, I wrote software as a contractor for what is now a major producer of financial software. (many of you have probably done your taxes on software that, in its early stages, was touched by me... wheeee... I am a celeb!)
Anyway... because of this exact issue, most financial software is done strictly with integer math, and the decimal point is simply a display convention. One dollar (pound, euro, whatever) is actually calculated in pennies or even mills (tenths of a penny, or 1/1000 of a dollar - it's what gas prices are actually measured in: $2.43&9/10), and then the display is parsed out with the decimal in the right place. This doesn't make the math any easier to calculate, but it DOES keep us from losing pennies along the way.
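That convention is easy to sketch. A minimal Python illustration (the function name is made up for the example, not from any real billing system): the money is an integer count of cents, and the decimal point exists only in the formatted string.

```python
# Money as integer cents; the decimal point appears only on display.
def display(cents):
    # format a non-negative number of cents as dollars.cents
    return "$%d.%02d" % (cents // 100, cents % 100)

total_cents = 243 + 5        # $2.43 + $0.05, with no chance of drift
print(display(total_cents))  # $2.48
```

Because every intermediate value is an integer, no sum of prices can ever come out a fraction of a penny short.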
_____________________
Alain Talamasca, Ophidian Artisans - Fine Art for your Person, Home, and Business. Pando (105, 79, 99)
|
|
DoteDote Edison
Thinks Too Much
Join date: 6 Jun 2004
Posts: 790
|
11-28-2005 11:28
Thanks for the replies and work-arounds. Sometimes I understand the most complex things, but overlook a common framework issue such as float math limitations. Basically, I was making an audio volume control for a friend's radio, and wanted the float range of 0.0 - 1.0 converted to percent. Now hopefully this thread will die quickly 
|
|
Chelsea Cork
Registered User
Join date: 13 Apr 2006
Posts: 2
|
04-14-2006 01:16
Bringing up a really old thread, but related to this topic: does anyone happen to know what the machine epsilon (http://en.wikipedia.org/wiki/Machine_epsilon) is for floats in LSL?
|
|
Strife Onizuka
Moonchild
Join date: 3 Mar 2004
Posts: 5,887
|
04-14-2006 04:03
LSL is written in C++ using one of the commercially available compilers (the client is written in MSVC - they are using one of the .NET releases but not writing .NET code). The servers are probably compiled with GCC; take a look at what GCC has set as its default for x86 systems. LSL uses 32-bit single-precision floats. I've been able to demonstrate that SL is probably using IEEE-754 floats, by writing a set of functions that convert floats to integers and back, mimicking the result of a union; these functions have yet to corrupt any numbers I have fed them (if it acts like a duck, quacks like a duck, flies like a duck, swims like a duck, waddles like a duck, and dives like a duck, then it most definitely is very duck-like). This is consistent with what native x86 chips support. Most x86 systems do all their work with floats as doubles, and only convert them to floats for storage purposes. If memory serves, the epsilon is used when comparing floats, to know how many bits to allow to be corrupted. It is my understanding that LSL does not allow for corrupted bits in comparisons.
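Assuming IEEE-754 single precision is the right model, the epsilon can be measured rather than looked up. A Python sketch that forces every value through 32 bits, plus the "union" trick for inspecting a float's raw bit pattern:

```python
import struct

def f32(x):
    # squeeze a Python double through a 32-bit IEEE-754 single float
    return struct.unpack('f', struct.pack('f', x))[0]

# Machine epsilon: smallest eps where 1.0 + eps is distinguishable from 1.0
eps = 1.0
while f32(1.0 + eps / 2) != 1.0:
    eps /= 2
print(eps)          # 2**-23, about 1.1920929e-07, for single precision

# The union trick: read a float's bits back as an unsigned integer
bits = struct.unpack('>I', struct.pack('>f', 1.0))[0]
print(hex(bits))    # 0x3f800000: sign 0, exponent 127, mantissa 0
```

So if LSL really is IEEE-754 binary32, the machine epsilon Chelsea asked about would be 2^-23.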
_____________________
Truth is a river that is always splitting up into arms that reunite. Islanded between the arms, the inhabitants argue for a lifetime as to which is the main river. - Cyril Connolly
Without the political will to find common ground, the continual friction of tactic and counter tactic, only creates suspicion and hatred and vengeance, and perpetuates the cycle of violence. - James Nachtwey
|
|
Draco18s Majestic
Registered User
Join date: 19 Sep 2005
Posts: 2,744
|
04-15-2006 17:43
From: Travis Bjornson Yes, this bug exists in every machine that I know of when using floats. I think it's actually the CPU that does the math, and as I recall, if you take the square root of 25 on an Apple IIe, it gives a result of about 25.00000005. Square root of 25 = 25? Damn, that is one heck of a floating point error! ;P
|
|
MC Seattle
Registered User
Join date: 3 Apr 2006
Posts: 63
|
04-15-2006 22:02
From: Strife Onizuka LSL is written in C++ using one of the commercially available compilers (the client is written in MSVC - they are using one of the .NET releases but not writing .NET code). The servers are probably compiled with GCC; take a look at what GCC has set as its default for x86 systems. LSL uses 32-bit single-precision floats. I've been able to demonstrate that SL is probably using IEEE-754 floats, by writing a set of functions that convert floats to integers and back, mimicking the result of a union; these functions have yet to corrupt any numbers I have fed them (if it acts like a duck, quacks like a duck, flies like a duck, swims like a duck, waddles like a duck, and dives like a duck, then it most definitely is very duck-like). This is consistent with what native x86 chips support. Most x86 systems do all their work with floats as doubles, and only convert them to floats for storage purposes. If memory serves, the epsilon is used when comparing floats, to know how many bits to allow to be corrupted. It is my understanding that LSL does not allow for corrupted bits in comparisons. Thanks for all the helpful info (the above post was me, accidentally logged in on the G/F's account). I don't have a use right now for the macheps, but it will probably come in handy sooner or later. When I was writing regression analysis software, the macheps was necessary to tell whether the result of a function was significantly different from zero.
|