Discussion:
[Maxima-commits] [git] Maxima CAS branch master updated. branch-5_40-base-83-ged20729
Raymond Toy
2017-06-26 18:29:08 UTC
On Sun, Jun 25, 2017 at 2:25 AM, Andreas Eder via Maxima-commits <
commit e6a7a8af637ee29f549b1dd47633fb08172e17ae
Date: Sun Jun 25 11:06:41 2017 +0200
replaced uses of the half macro with the variable 1//2.
had to adapt relative error bounds for the erf-inverse_erf tests
slightly.
This makes no sense to me. Why should the results be different?

This needs some investigation. Is it because the defvar version wasn't
simplified?
--
Ray
Andreas Eder
2017-06-26 21:02:07 UTC
Post by Raymond Toy
On Sun, Jun 25, 2017 at 2:25 AM, Andreas Eder via Maxima-commits <
commit e6a7a8af637ee29f549b1dd47633fb08172e17ae
Date: Sun Jun 25 11:06:41 2017 +0200
replaced uses of the half macro with the variable 1//2.
had to adapt relative error bounds for the erf-inverse_erf tests
slightly.
This makes no sense to me. Why should the results be different?
This needs some investigation. Is it because the defvar version wasn't
simplified?
No, it is simplified.
The defvar is: (defvar 1//2 '((rat simp) 1 2))
That is, as far as I can tell, simplified.
What I suspect is a different order of evaluation. The macro version is
expanded into a literal at macroexpansion time, whereas the defvar
generates a reference to the variable at compile time. That might lead
to different orders of evaluation. But I'm no expert in these matters.
The difference is very small and I always expect such things when
dealing with floating point.
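For readers following along, here is a minimal Common Lisp sketch of the two forms being compared. The defvar line is the one quoted above; the `half` macro definition is a guessed illustration of the pattern being discussed, not Maxima's actual source:

```lisp
;; Hypothetical sketch of a "half" macro: it expands into the quoted
;; literal at macroexpansion time, so the constant is baked directly
;; into the compiled code.
(defmacro half () ''((rat simp) 1 2))

;; The defvar (as quoted in this thread) instead makes compiled code
;; fetch the variable's value at run time.
(defvar 1//2 '((rat simp) 1 2))

;; Both denote the same already-simplified Maxima rational 1/2.
(equal (half) 1//2)   ; => T
```

Either way the same list object shape is produced; the difference is only when and how the compiled code obtains it.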

'Andreas
--
ceterum censeo redmondinem esse delendam
Raymond Toy
2017-06-26 22:08:50 UTC
Post by Raymond Toy
On Sun, Jun 25, 2017 at 2:25 AM, Andreas Eder via Maxima-commits <
commit e6a7a8af637ee29f549b1dd47633fb08172e17ae
Date: Sun Jun 25 11:06:41 2017 +0200
replaced uses of the half macro with the variable 1//2.
had to adapt relative error bounds for the erf-inverse_erf tests
slightly.
This makes no sense to me. Why should the results be different?
This needs some investigation. Is it because the defvar version wasn't
simplified?
Andreas> No, it is simplified.
Andreas> The defvar is: (defvar 1//2 '((rat simp) 1 2))
Andreas> That is, as far as I can tell, simplified.
Andreas> What I suspect is a different order of evaluation. The macro version is
Andreas> expanded into a literal at macroexpansion time, whereas the defvar
Andreas> generates a reference to the variable at compile time. That might lead
Andreas> to different orders of evaluation. But I'm no expert in these matters.
Andreas> The difference is very small and I always expect such things when
Andreas> dealing with floating point.

That might be true, but in the tests, there shouldn't be any
difference. The VALUE being used should be exactly the same in the
numeric tests as before.

It's ok to change the thresholds when it makes sense, but I just don't
see how this is one of those cases. You didn't change the value of
anything.

What really caused the thresholds to change? This troubles me a
lot. (Can't investigate myself right now.)

--
Ray
Gunter Königsmann
2017-06-27 05:44:52 UTC
Post by Raymond Toy
Post by Raymond Toy
On Sun, Jun 25, 2017 at 2:25 AM, Andreas Eder via Maxima-commits <
commit e6a7a8af637ee29f549b1dd47633fb08172e17ae
Date: Sun Jun 25 11:06:41 2017 +0200
replaced uses of the half macro with the variable 1//2.
had to adapt relative error bounds for the erf-inverse_erf tests
slightly.
This makes no sense to me. Why should the results be different?
This needs some investigation. Is it because the defvar version wasn't
simplified?
Andreas> No, it is simplified.
Andreas> The defvar is: (defvar 1//2 '((rat simp) 1 2))
Andreas> That is, as far as I can tell, simplified.
Andreas> What I suspect is a different order of evaluation. The macro version is
Andreas> expanded into a literal at macroexpansion time, whereas the defvar
Andreas> generates a reference to the variable at compile time. That might lead
Andreas> to different orders of evaluation. But I'm no expert in these matters.
Andreas> The difference is very small and I always expect such things when
Andreas> dealing with floating point.
That might be true, but in the tests, there shouldn't be any
difference. The VALUE being used should be exactly the same in the
numeric tests as before.
If the cause for the change is indeed the evaluation order, my next
question would be: Did the change increase the accuracy (after all,
there is no guarantee that the value originally contained in the
testbench was even correct before the change)? And does it do so in the
majority of cases?
When designing digital filters I always keep an eye on the order the
operations are executed in: most of the time there is one order which
is prone to integer overflows and one that is prone to losing accuracy.
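As a small illustration of that point (plain Common Lisp, not Maxima code): floating-point addition is not associative, so merely regrouping a sum changes which rounding errors survive.

```lisp
;; Summing the same three doubles in two different orders.
(let ((a 1d0)
      (b 1d-16)
      (c 1d-16))
  ;; Left to right: b is below half an ulp of a and vanishes in the
  ;; first addition, and c vanishes the same way afterwards.
  (print (+ (+ a b) c))    ; => 1.0d0
  ;; Grouping the small terms first lets their combined value survive
  ;; the rounding.
  (print (+ a (+ b c))))   ; => 1.0000000000000002d0
```

The two results differ by one ulp even though the operands are identical, which is exactly the kind of last-digit drift being debated here.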

Kind regards,

Gunter.
Raymond Toy
2017-06-27 15:24:24 UTC
Post by Raymond Toy
Post by Raymond Toy
On Sun, Jun 25, 2017 at 2:25 AM, Andreas Eder via Maxima-commits <
commit e6a7a8af637ee29f549b1dd47633fb08172e17ae
Date: Sun Jun 25 11:06:41 2017 +0200
replaced uses of the half macro with the variable 1//2.
had to adapt relative error bounds for the erf-inverse_erf tests
slightly.
This makes no sense to me. Why should the results be different?
This needs some investigation. Is it because the defvar version wasn't
simplified?
Andreas> No, it is simplified.
Andreas> The defvar is: (defvar 1//2 '((rat simp) 1 2))
Andreas> That is, as far as I can tell, simplified.
Andreas> What I suspect is a different order of evaluation. The macro version is
Andreas> expanded into a literal at macroexpansion time, whereas the defvar
Andreas> generates a reference to the variable at compile time. That might lead
Andreas> to different orders of evaluation. But I'm no expert in these matters.
Andreas> The difference is very small and I always expect such things when
Andreas> dealing with floating point.
Post by Raymond Toy
That might be true, but in the tests, there shouldn't be any
difference. The VALUE being used should be exactly the same in the
numeric tests as before.
Gunter> If the cause for the change is indeed the evaluation order my next
Gunter> question would be: Did the change increase the accuracy (after all there
Gunter> is no guarantee that the value originally contained in the testbench was
Gunter> even correct before the change)? And does it do so in the majority of cases?

Well, since the thresholds are upper bounds, if the error decreased,
you'd never know from the test. So changing the threshold meant the
error got worse.

I didn't check to see exactly what was changed, but I assume Andreas
just removed the macro for 1//2 so that the defvar was used. That
shouldn't change the order of evaluation of anything.

This really needs to be investigated. It could be something simple,
or perhaps something really bad---we just don't know.

As I said, I can't personally look for a couple of weeks.

--
Ray
Andreas Eder
2017-06-27 19:25:17 UTC
On Tue 27 Jun 2017 at 07:44, Gunter Königsmann
Post by Gunter Königsmann
If the cause for the change is indeed the evaluation order my next
question would be: Did the change increase the accuracy (after all
there is no guarantee that the value originally contained in the
testbench was even correct before the change)? And does it do so in
the majority of cases?
I'm not really sure it is the evaluation order, but at the moment this
is my best guess.
I had to loosen the relative error bounds for two tests in
rtest_gamma.mac from 5.9b-24 to 7.8b-24 and from 6.6b-25 to 8.3b-25,
where fpprec was set to 24.
The relevant change is in rpart.lisp, in the function
risplit-expt-sqrt-pow, where the macros (1//2) and (half) were changed
to the variable 1//2.

Andreas
--
ceterum censeo redmondinem esse delendam
Robert Dodier
2017-06-28 03:50:01 UTC
Post by Raymond Toy
What I suspect is a different order of evaluation. The macro version
is expanded into a literal at macroexpansion time, whereas the defvar
generates a reference to the variable at compile time. That might lead
to different orders of evaluation. But I'm no expert in these matters.
The difference is very small and I always expect such things when
dealing with floating point.
The premise of making nonfunctional changes to the code is that they
won't have any effect. We can be sure that we haven't caused any trouble
because the results are exactly the same as before.

If the results differ in any respect, well, all bets are off. Does it
work better now? Is it a bug? Does it make no difference? We won't know
until the problem is studied in detail.

I generally don't have a strong opinion about such changes, as long as
they have, in fact, no effect on results. But if the results change,
then someone, probably the author of the changes, has to ensure that
the changes are harmless or positive.

At this point I would like to ask you to revert any changes which caused
results to change, pending analysis of the changed results. As an
implementation detail, I believe it's generally recommended to revert
changes which have already been published by committing another patch,
so there are two commits, one to do and one to undo, rather than
deleting the commit via git reset, which rewrites history, as they say.

best

Robert Dodier
